Developer's Closet
A place where I can put my PHP, SQL, Perl, JavaScript, and VBScript code.

19Apr/16
Manage Traces in SQL Server

Quick notes on managing traces in SQL Server:

View all running traces:

SELECT * FROM sys.traces

Stop the trace (@status = 0), then close and delete it (@status = 2):

EXEC sp_trace_setstatus @traceid = 2 , @status = 0
EXEC sp_trace_setstatus @traceid = 2 , @status = 2

Use option 0 = disable, 1 = start, 2 = disable and delete.

Filed under: SQL
21May/15
Delete a Kafka Topic

Typical delete command:

kafka-topics --zookeeper zk_host:port/chroot --delete --topic my_topic_name

The topic will be "marked for deletion" and should be removed soon. What happens is that an /admin/delete_topics/<topic> node is created in ZooKeeper, which triggers the deletion. When the broker sees this update, the topic stops accepting new produce/consume requests, and eventually the topic is deleted. If the delete command doesn't take effect right away, try restarting the Kafka service.
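To check whether the delete has actually been picked up, you can look for the marker znode in ZooKeeper. A quick sketch, using the same hbase zkcli shell described later in this post (adjust for your own ZooKeeper client): topics still marked for deletion show up under /admin/delete_topics, and the topic disappears from /brokers/topics once the delete completes.

hbase zkcli

ls /admin/delete_topics

ls /brokers/topics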

If the delete command fails (the topic stays marked for deletion forever), or the broker has lost its connection to the topic, delete the topic manually:

1. Stop the Kafka Broker.

2. Delete the topic from disk.

For example:

sudo mv /space1/kafka/FourSquareSearchApiResults-0 /space1/kafka/FourSquareSearchApiResults-0.bak

3. Delete the topic znode from ZooKeeper:

 

Here are the ZooKeeper commands to delete the topic from ZK:

1. Log onto zkcli:

hbase zkcli

2. List the topics, remove the topic znode, and verify it is gone:

ls /brokers/topics

rmr /brokers/topics/my_topic_name

ls /brokers/topics

3. Start Kafka.

4. You can then recreate the Kafka topic.
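For example, a minimal recreate command might look like this (the partition and replication counts are placeholders; use whatever the original topic had):

kafka-topics --zookeeper zk_host:port/chroot --create --topic my_topic_name --partitions 3 --replication-factor 2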

21Apr/15
Bash

Output in red:

echo "$(tput setaf 1)Red Text$(tput sgr0)"
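A slightly longer sketch with a few more colors, in case you want something reusable (the color numbers follow the standard terminal palette):

# standard terminal palette: 1 = red, 2 = green, 3 = yellow
red=$(tput setaf 1)
green=$(tput setaf 2)
yellow=$(tput setaf 3)
# reset all attributes
reset=$(tput sgr0)

echo "${red}Error${reset} ${green}OK${reset} ${yellow}Warning${reset}"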

Filed under: Linux
26Feb/15
PowerShell script to convert string to encrypted password and back again

I've found it useful to store an encrypted password in a file on disk. This is not generally a good idea, but at least the password is encrypted and tied to the user and machine that created it. Here is a rough outline of the process:

# convert clear text to an encrypted password
$password = ConvertTo-SecureString 'SomePassword' -AsPlainText -Force
# write to file as a user/machine encrypted string, which cannot be decrypted by any other user on any other machine
$password | ConvertFrom-SecureString | Set-Content "c:\temp\cred.txt"
$password

 

To make the file more usable, prefix the password with the username when you write the file, then query for the username when you read it back:

# user name to look up in the file
$username = "UserName";
# read the matching entry and convert the password to a secure string that can be passed to Windows for authentication
$password = Get-Content "c:\temp\cred.txt" | Select-String $Username | foreach{$_ -replace $Username, ""} | foreach{$_ -replace " ",""} | ConvertTo-SecureString;

 

Finally, you can go a step further and crack open the password:

# convert the encrypted password back to clear text
$Ptr = [System.Runtime.InteropServices.Marshal]::SecureStringToCoTaskMemUnicode($password);
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringUni($Ptr);
[System.Runtime.InteropServices.Marshal]::ZeroFreeCoTaskMemUnicode($Ptr);
$password
Filed under: PowerShell
9Feb/15
Find and Delete Running SQL Trace

Yesterday we had a long-running trace take up too much space on disk. To find the running trace, look at the path column to identify your trace file and note the trace id:

SELECT * FROM sys.traces

Next, stop the trace (@status = 0), then close and delete it (@status = 2):

EXEC sp_trace_setstatus @traceid = # , @status = 0
EXEC sp_trace_setstatus @traceid = # , @status = 2

Use option 0 = disable, 1 = start, 2 = disable and delete.
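If you'd rather do the lookup from a shell, something along these lines should work with sqlcmd (the server name, the Windows-auth flag, and the path filter are assumptions for this sketch; plug the returned id into sp_trace_setstatus):

# list traces whose file path matches a pattern
sqlcmd -S ServerName -E -Q "SELECT id, path, max_size FROM sys.traces WHERE path LIKE '%MyTrace%'"

# stop, then close and delete, the trace (id 2 used as an example)
sqlcmd -S ServerName -E -Q "EXEC sp_trace_setstatus @traceid = 2, @status = 0; EXEC sp_trace_setstatus @traceid = 2, @status = 2;"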

Have fun with traces, but make sure to stop them before you go home for the weekend!

Filed under: SQL
9Sep/14
Recover Lost Files on Linux

The TestDisk package includes PhotoRec, which is very good at recovering lost files:

sudo apt-get install testdisk

To run PhotoRec:

sudo photorec
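PhotoRec is interactive by default, but you can also point it at a specific device and recovery directory from the command line. A rough example (the device name and destination directory are placeholders):

# scan /dev/sdb1 and write anything recovered under /home/user/recovered
sudo photorec /log /d /home/user/recovered /dev/sdb1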

PhotoRec: http://www.cgsecurity.org/wiki/PhotoRec

 

Filed under: Linux, Ubuntu
6Aug/14
Copy data from one Hadoop cluster to another Hadoop cluster (running different versions of Hadoop)

I had to copy data from one Hadoop cluster to another recently. However, the two clusters ran different versions of Hadoop, which made using distcp a little tricky.

Some notes on distcp: By default, distcp skips files that already exist in the destination, but they can be overwritten by supplying the -overwrite option. You can also update only files that have changed using the -update option. distcp is implemented as a MapReduce job where the work of copying is done by maps that run in parallel across the cluster; there are no reducers. Each file is copied by a single map, and distcp tries to give each map approximately the same amount of data by bucketing files into roughly equal allocations.

The following command will copy the folder contents from one Hadoop cluster to a folder on another Hadoop cluster. Using hftp is necessary because the clusters run different versions of Hadoop. The command must be run on the destination cluster. Be sure your user has access to write to the destination folder.

hadoop distcp -pb hftp://namenode:50070/tmp/* hdfs://namenode/tmp/

Note: The -pb option will preserve the block size.

Double Note: For copying between two different versions of Hadoop, we must use the HftpFileSystem, which is a read-only file system; that is why the distcp must be run on the destination cluster.

The following command will copy data from Hadoop clusters that are the same version.

hadoop distcp -pb hdfs://namenode/tmp/* hdfs://namenode/tmp/
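To re-run the same copy and only transfer files that have changed since the last run, the -update option mentioned above can be added; a sketch using the same placeholder paths as above:

hadoop distcp -update -pb hdfs://namenode/tmp/* hdfs://namenode/tmp/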

Filed under: HDFS, Ubuntu
4Aug/14
Useful Scripts

PowerShell (set an AD user's allowLogon property via ADSI):

$ou = [adsi]"LDAP://OU=Marketing,OU=Departments,DC=Company,DC=Domain";
$user = $ou.psbase.get_children().find('CN=UserName');
$user.psbase.invokeSet("allowLogon",0);
$user.setinfo();

Bash (back up a file, then comment out a matching line and the next three lines):

#!/bin/bash
fname="/path/file"
tname="/new/file.tmp"
i="0"
DATE=$(date +%Y%m%d%H%M%S)
sudo cp "$fname" "$fname".$DATE

while IFS='' read -r line
do
    if [ "$line" == " line item:" ] || [ $i -gt 0 -a $i -lt 4 ]
    then
        # comment out the matched line and the next three lines
        echo -e "#$line" | sudo tee -a "$tname"
        i=$(($i + 1))
    else
        printf "%s\n" "$line" | sudo tee -a "$tname"
    fi
done <"$fname"

sudo mv "$tname" "$fname"

PowerShell (transfer a file with psftp):

$SourcePath = 'C:\Path\';
$DestServer = 'ServerName';
$DestPath = '/path/';
$FileName = 'FileName';
$Output = @()
$cmd = @(
"y",
"lcd $SourcePath",
"cd $DestPath",
"mput $FileName",
"quit"
)

$Output = $cmd | & "C:\Program Files (x86)\Putty\psftp.exe" -v $DestServer 2>&1;
$Err = [String]($Output -like "*=>*");
If (($LastExitCode -ne 0) -or (($Err.Contains("=>")) -eq $false)) {
throw "File Failed to Transfer! `n $($Output)";
}

Linux (create a mount point, add it to fstab, and mount the filesystem):

sudo mkdir /space;
echo "/dev/space /space ext4 defaults 0 0" | sudo tee -a /etc/fstab;
sudo mount /dev/space /space;
sudo df -h;
ls /dev/;

PowerShell (disable remote control for an AD user with the Quest AD cmdlets, then stop the current PowerShell process):

Add-PSSnapin Quest.ActiveRoles.ADManagement;
connect-QADService -service domain;
set-QADuser UserName -TSRemoteControl 0;
$objCurrentPSProcess = [System.Diagnostics.Process]::GetCurrentProcess();
Stop-Process -Id $objCurrentPSProcess.ID;

 

18Jul/14
Sqoop Server Startup Failure: Upgrade required but not allowed

After an upgrade from CDH 5.0.2 to CDH 5.0.3, Sqoop failed to start with the following error: Server startup failure, Connector registration failed, Upgrade required but not allowed - Connector: generic-jdbc-connector.

To resolve this problem, I had to add the following property to the Sqoop 2 Server Advanced Configuration Snippet (Safety Valve) for sqoop.properties. You can find this setting in Cloudera Manager under Sqoop Service, Configuration, Sqoop 2 Server Default Group, Advanced:

org.apache.sqoop.connector.autoupgrade=true

After the upgrade has completed successfully, the property can be removed.

Log File: /var/log/sqoop2/sqoop-cmf-sqoop-SQOOP_SERVER-servername01.ds-iq.corp.log.out

Server startup failure
org.apache.sqoop.common.SqoopException: CONN_0007:Connector registration failed
at org.apache.sqoop.connector.ConnectorManager.registerConnectors(ConnectorManager.java:236)
at org.apache.sqoop.connector.ConnectorManager.initialize(ConnectorManager.java:197)
at org.apache.sqoop.connector.ConnectorManager.initialize(ConnectorManager.java:145)
at org.apache.sqoop.core.SqoopServer.initialize(SqoopServer.java:51)
at org.apache.sqoop.server.ServerInitializer.contextInitialized(ServerInitializer.java:38)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4206)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4705)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:799)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:601)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:943)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:778)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:504)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1317)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:324)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1065)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1057)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:754)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: org.apache.sqoop.common.SqoopException: JDBCREPO_0026:Upgrade required but not allowed - Connector: generic-jdbc-connector
at org.apache.sqoop.repository.JdbcRepository$3.doIt(JdbcRepository.java:190)
at org.apache.sqoop.repository.JdbcRepository.doWithConnection(JdbcRepository.java:90)
at org.apache.sqoop.repository.JdbcRepository.doWithConnection(JdbcRepository.java:61)
at org.apache.sqoop.repository.JdbcRepository.registerConnector(JdbcRepository.java:156)
at org.apache.sqoop.connector.ConnectorManager.registerConnectors(ConnectorManager.java:216)
... 28 more

1Jul/14
HBase All Regions in Transition: state=FAILED_OPEN

After I added a jar file to the HBase Master, regions failed to transition to a RegionServer. Removing the jar file from the hbase/lib folder (full path to the jar: /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hbase/lib/) resolved the problem. What tipped me off was the missing class definition: Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/ipc/CoprocessorProtocol. The errors are below.

Failed open of region=REGION.NAME,,4194066667839.6ea7d7ff9276f9c0e9b126c73e25bc54., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3970)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4276)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4249)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4205)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4156)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:475)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:140)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3967)
... 10 more
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/ipc/CoprocessorProtocol
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
...

9:10:19.721 AM INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler
Opening of region {ENCODED => 6ea7d7ff9276f9c0e9b126c73e25bc54, NAME => 'REGION.NAME,,4194066667839.6ea7d7ff9276f9c0e9b126c73e25bc54.', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 28