Developer's Closet
A place where I can put my PHP, SQL, Perl, JavaScript, and VBScript code.

Manage Traces in SQL Server

Quick notes on managing traces in SQL Server:

View all running traces:

SELECT * FROM sys.traces

Stop the trace (@status = 0), then close and delete it (@status = 2):

EXEC sp_trace_setstatus @traceid = 2 , @status = 0
EXEC sp_trace_setstatus @traceid = 2 , @status = 2

Use @status 0 = stop, 1 = start, 2 = close and delete. A trace must be stopped before it can be closed and deleted.

Filed under: SQL
Delete a Kafka Topic

Typical delete command:

kafka-topics --zookeeper zk_host:port/chroot --delete --topic my_topic_name

The topic will be "marked for deletion" and should be removed shortly. Under the hood, an /admin/delete_topics/<topic> znode is created in ZooKeeper, which triggers the deletion. Once the broker sees this update, the topic no longer accepts new produce/consume requests, and eventually the topic is deleted. If the delete command doesn't take effect right away, try restarting the Kafka service.
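To check whether a topic is still pending deletion, you can grep the topic listing. A minimal sketch, assuming kafka-topics is on the PATH and the listing line format is "topic - marked for deletion"; the helper reads the listing on stdin so it can be piped from the real command:

```shell
# Return success if the given topic is still marked for deletion in the
# listing read from stdin.
pending_deletion() {
  grep -q "^$1 - marked for deletion"
}

# Usage (zk_host:port/chroot and my_topic_name as in the post):
# kafka-topics --zookeeper zk_host:port/chroot --list |
#   pending_deletion my_topic_name && echo "still marked for deletion"
```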

If the delete command fails (the topic stays marked for deletion forever), or the Broker has lost its connection to the topic, delete the topic manually:

1. Stop the Kafka Broker.

2. Delete the topic from disk.

For example:

sudo mv /space1/kafka/FourSquareSearchApiResults-0 /space1/kafka/FourSquareSearchApiResults-0.bak

3. Delete the topic znode from ZooKeeper. Here are the commands to use:

1. Log onto zkcli:

hbase zkcli

2. List the topics, remove the topic znode, and verify it's gone:

ls /brokers/topics

rmr /brokers/topics/my_topic_name

ls /brokers/topics

3. Start Kafka.

4. You can then recreate the Kafka topic.
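The manual steps above can be consolidated into a dry-run sketch that only prints each command for review before you run anything by hand (the service name, data directory, partition number, and topic are placeholders taken from the example above):

```shell
# Print the manual-delete commands for one topic partition without
# executing them; review, then run each one by hand.
manual_topic_delete() {
  topic="$1"; datadir="$2"
  echo "sudo service kafka-server stop"
  echo "sudo mv $datadir/$topic-0 $datadir/$topic-0.bak"
  echo "hbase zkcli   # then run: rmr /brokers/topics/$topic"
  echo "sudo service kafka-server start"
}

manual_topic_delete my_topic_name /space1/kafka
```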


Tip: output terminal text in red:

echo "$(tput setaf 1)Red Text$(tput sgr0)"
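The same trick generalizes to any tput color. A small helper (the function name color_echo is my own, not a standard command; 1 = red, 2 = green, 4 = blue):

```shell
# Print text in a given tput color, resetting attributes afterwards
# with sgr0 so later output is unaffected.
color_echo() {
  color="$1"; shift
  printf '%s%s%s\n' "$(tput setaf "$color")" "$*" "$(tput sgr0)"
}

color_echo 1 "Red Text"
```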

Filed under: Linux
PowerShell script to convert string to encrypted password and back again

I've found it useful to store an encrypted password in a file on disk. Storing credentials on disk is not generally a good idea, but at least the password is encrypted to the user and machine that created it. Here is a rough outline of the process:

# convert clear text to an encrypted password
$password = ConvertTo-SecureString 'SomePassword' -AsPlainText -Force
# write to file as a user/machine encrypted string, which cannot be decrypted by any other user on any other machine
$password | ConvertFrom-SecureString | Set-Content "c:\temp\cred.txt"


To make the file really usable, preface the password with the username in the file, then look the entry up by username:

# look up the entry for the user name in the file

$username = "UserName";
# convert the password back to a secure string that can be passed to Windows for authentication
$password = Get-Content "c:\temp\cred.txt" | Select-String $Username | foreach{$_ -replace $Username, ""} | foreach{$_ -replace " ",""} | ConvertTo-SecureString;


Finally, you can go a step further and crack open the password:

# convert the encrypted password back to clear text
$Ptr = [System.Runtime.InteropServices.Marshal]::SecureStringToCoTaskMemUnicode($password);
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringUni($Ptr);
[System.Runtime.InteropServices.Marshal]::ZeroFreeCoTaskMemUnicode($Ptr);
Filed under: PowerShell
Find and Delete Running SQL Trace

Yesterday we had a long-running trace take up too much space on disk. To find the running trace, look at the trace file's location in the output to identify your trace, and note its id:

SELECT * FROM sys.traces

Next, stop the trace, then close and delete it with the @status option = 2:

EXEC sp_trace_setstatus @traceid = # , @status = 0
EXEC sp_trace_setstatus @traceid = # , @status = 2

Use option 0 = stop, 1 = start, 2 = close and delete. The trace must be stopped before it can be deleted.

Have fun with traces, but make sure to stop them before you go home for the weekend!

Filed under: SQL
Recover Lost Files on Linux

TestDisk includes PhotoRec, which is very good at recovering files:

sudo apt-get install testdisk

To run PhotoRec:

sudo photorec



Filed under: Linux, Ubuntu
Copy data from one Hadoop cluster to another Hadoop cluster (running different versions of Hadoop)

I had to copy data from one Hadoop cluster to another recently. However, the two clusters ran different versions of Hadoop, which made using distcp a little tricky.

Some notes on distcp: By default, distcp skips files that already exist in the destination; they can be overwritten with the -overwrite option, or you can copy only files that have changed with the -update option. distcp is implemented as a MapReduce job in which the copying is done by map tasks running in parallel across the cluster; there are no reducers. Each file is copied by a single map, and distcp tries to give each map approximately the same amount of data by bucketing files into roughly equal allocations.

The following command will copy the folder contents from one Hadoop cluster to a folder on another Hadoop cluster. Using hftp is necessary because the clusters run different versions of Hadoop. The command must be run on the destination cluster. Be sure your user has write access to the destination folder.

hadoop distcp -pb hftp://namenode:50070/tmp/* hdfs://namenode/tmp/

Note: The -pb option will preserve the block size.

Double Note: For copying between two different versions of Hadoop we must use the HftpFileSystem, which is a read-only file system, so distcp must be run on the destination cluster.

The following command will copy data from Hadoop clusters that are the same version.

hadoop distcp -pb hdfs://namenode/tmp/* hdfs://namenode/tmp/
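The -update and -overwrite options mentioned above slot in right after distcp in either form. A sketch that only prints the command line so the flags can be reviewed before launching the MapReduce job (the src-nn and dst-nn hostnames and the path are placeholders):

```shell
# Print the cross-version distcp command with -update, rather than
# running it; copy-paste the output once it looks right.
build_distcp() {
  src="$1"; dst="$2"; path="$3"
  echo "hadoop distcp -update -pb hftp://${src}:50070${path} hdfs://${dst}${path}"
}

build_distcp src-nn dst-nn /tmp/
```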

Filed under: HDFS, Ubuntu
Useful Scripts


# PowerShell: find a user object under an OU via ADSI
$ou = [adsi]"LDAP://OU=Marketing,OU=Departments,DC=Company,DC=Domain";
$user = $ou.psbase.get_children().find('CN=UserName');


# Bash: back up a file with a timestamp suffix
DATE=$(date +%Y%m%d%H%M%S)
sudo cp "$fname" "$fname".$DATE

# Bash: comment out a matching line and the three lines that follow it,
# writing the result to $tname (reconstructed; the original snippet was
# missing its do/then/else/fi keywords)
i=0
while IFS='' read -r line; do
  if [ "$line" == " line item:" ] || [ $i -gt 0 -a $i -lt 4 ]; then
    echo -e "#$line" | sudo tee -a "$tname"
    i=$(($i + 1))
  else
    printf "%s\n" "$line" | sudo tee -a "$tname"
  fi
done <"$fname"

sudo mv "$tname" "$fname"


# PowerShell: upload a file with psftp and verify the transfer succeeded
$SourcePath = 'C:\Path\';
$DestServer = 'ServerName';
$DestPath = '/path/';
$FileName = 'FileName';
$Output = @();
$cmd = @(
    "lcd $SourcePath",
    "cd $DestPath",
    "mput $FileName"
);
$Output = $cmd | & "C:\Program Files (x86)\Putty\psftp.exe" -v $DestServer 2>&1;
$Err = [String]($Output -like "*=>*");
If (($LastExitCode -ne 0) -or (($Err.Contains("=>")) -eq $false)) {
    throw "File Failed to Transfer! `n $($Output)";
}

# Bash: mount a new ext4 volume and persist it in /etc/fstab
sudo mkdir /space;
echo "/dev/space /space ext4 defaults 0 0" | sudo tee -a /etc/fstab;
sudo mount /dev/space /space;
sudo df -h;
ls /dev/;


# PowerShell: disable remote control of a user's Terminal Services sessions, then exit the current process
Add-PSSnapin Quest.ActiveRoles.ADManagement;
connect-QADService -service domain;
set-QADuser UserName -TSRemoteControl 0;
$objCurrentPSProcess = [System.Diagnostics.Process]::GetCurrentProcess();
Stop-Process -Id $objCurrentPSProcess.ID;


Sqoop Server Startup Failure: Upgrade required but not allowed

After an upgrade from CDH 5.0.2 to CDH 5.0.3, Sqoop failed to start with the following error: Server startup failure, Connector registration failed, Upgrade required but not allowed - Connector: generic-jdbc-connector.

To resolve this problem I had to add a property to the Sqoop 2 Server Advanced Configuration Snippet (Safety Valve). You can find this snippet under Cloudera Manager, Sqoop Service, Configuration, Sqoop 2 Server Default Group, Advanced:


After the upgrade has completed successfully, the property can be removed.

Log File: /var/log/sqoop2/sqoop-cmf-sqoop-SQOOP_SERVER-servername01.ds-iq.corp.log.out

Server startup failure
org.apache.sqoop.common.SqoopException: CONN_0007:Connector registration failed
at org.apache.sqoop.connector.ConnectorManager.registerConnectors(
at org.apache.sqoop.connector.ConnectorManager.initialize(
at org.apache.sqoop.connector.ConnectorManager.initialize(
at org.apache.sqoop.core.SqoopServer.initialize(
at org.apache.sqoop.server.ServerInitializer.contextInitialized(
at org.apache.catalina.core.StandardContext.listenerStart(
at org.apache.catalina.core.StandardContext.start(
at org.apache.catalina.core.ContainerBase.addChildInternal(
at org.apache.catalina.core.ContainerBase.addChild(
at org.apache.catalina.core.StandardHost.addChild(
at org.apache.catalina.startup.HostConfig.deployWAR(
at org.apache.catalina.startup.HostConfig.deployWARs(
at org.apache.catalina.startup.HostConfig.deployApps(
at org.apache.catalina.startup.HostConfig.start(
at org.apache.catalina.startup.HostConfig.lifecycleEvent(
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(
at org.apache.catalina.core.ContainerBase.start(
at org.apache.catalina.core.StandardHost.start(
at org.apache.catalina.core.ContainerBase.start(
at org.apache.catalina.core.StandardEngine.start(
at org.apache.catalina.core.StandardService.start(
at org.apache.catalina.core.StandardServer.start(
at org.apache.catalina.startup.Catalina.start(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at org.apache.catalina.startup.Bootstrap.start(
at org.apache.catalina.startup.Bootstrap.main(
Caused by: org.apache.sqoop.common.SqoopException: JDBCREPO_0026:Upgrade required but not allowed - Connector: generic-jdbc-connector
at org.apache.sqoop.repository.JdbcRepository$3.doIt(
at org.apache.sqoop.repository.JdbcRepository.doWithConnection(
at org.apache.sqoop.repository.JdbcRepository.doWithConnection(
at org.apache.sqoop.repository.JdbcRepository.registerConnector(
at org.apache.sqoop.connector.ConnectorManager.registerConnectors(
... 28 more

HBase All Regions in Transition: state=FAILED_OPEN

After I added a jar file to the HBase Master, regions failed to transition to a RegionServer. Below are the errors; removing the jar file from the hbase/lib folder (full path to jar: /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hbase/lib/) resolved the problem. What tipped me off was the missing class definition: Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/ipc/CoprocessorProtocol.

Failed open of region=REGION.NAME,,4194066667839.6ea7d7ff9276f9c0e9b126c73e25bc54., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
at java.lang.reflect.Constructor.newInstance(
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(
... 10 more
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/ipc/CoprocessorProtocol
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(

9:10:19.721 AM INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler
Opening of region {ENCODED => 6ea7d7ff9276f9c0e9b126c73e25bc54, NAME => 'REGION.NAME,,4194066667839.6ea7d7ff9276f9c0e9b126c73e25bc54.', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 28