Developer's Closet
A place where I can put my PHP, SQL, Perl, JavaScript, and VBScript code.

26 Sep 2017
Search for string and append string to file

A very useful string search-and-append that I find myself rewriting over and over, so I've decided to write a post and capture the script:

# If rc.local already mentions sensu-client, do nothing;
# otherwise insert the restart command just before the final "exit 0".
if grep -q sensu-client "/etc/rc.local"; then
    echo "exists"
else
    echo "does not exist"
    sudo sed -i '/exit 0$/i sudo service sensu-client restart' /etc/rc.local
fi
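
A more general one-liner for the same idea, appending a line only if an exact match is not already present (the file path and line below are just placeholders):

# Append the line only if it is not already in the file
grep -qxF "some line" /path/to/file || echo "some line" | sudo tee -a /path/to/file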

 

Filed under: Bash, Linux, Ubuntu
21 Apr 2015
Bash

Output in red:

echo "$(tput setaf 1)Red Text$(tput sgr0)"

Filed under: Linux
9 Sep 2014
Recover Lost Files on Linux

TestDisk includes PhotoRec, which is very good at recovering files:

sudo apt-get install testdisk

To run PhotoRec:

sudo photorec

PhotoRec: http://www.cgsecurity.org/wiki/PhotoRec

 

Filed under: Linux, Ubuntu
6 Aug 2014
Copy data from one Hadoop cluster to another Hadoop cluster (running different versions of Hadoop)

I had to copy data from one Hadoop cluster to another recently. However, the two clusters ran different versions of Hadoop, which made using distcp a little tricky.

Some notes on distcp: By default, distcp will skip files that already exist in the destination, but they can be overwritten by supplying the -overwrite option. You can also update only files that have changed using the -update option. distcp is implemented as a MapReduce job where the work of copying is done by maps that run in parallel across the cluster. There are no reducers. Each file is copied by a single map, and distcp tries to give each map approximately the same amount of data by bucketing files into roughly equal allocations.
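
For example (the cluster and path names below are just placeholders):

# Copy only files that are missing or have changed in the destination
hadoop distcp -update hdfs://source-namenode/data hdfs://dest-namenode/data

# Overwrite files that already exist in the destination
hadoop distcp -overwrite hdfs://source-namenode/data hdfs://dest-namenode/data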

The following command will copy the folder contents from one Hadoop cluster to a folder on another Hadoop cluster. Using hftp is necessary because the clusters run different versions of Hadoop. The command must be run on the destination cluster. Be sure your user has access to write to the destination folder.

hadoop distcp -pb hftp://namenode:50070/tmp/* hdfs://namenode/tmp/

Note: The -pb option will preserve the block size.

Double Note: For copying between two different versions of Hadoop we must use the HftpFileSystem, which is a read-only file system, so the distcp must be run on the destination cluster.

The following command will copy data between Hadoop clusters running the same version of Hadoop.

hadoop distcp -pb hdfs://namenode/tmp/* hdfs://namenode/tmp/

Filed under: HDFS, Ubuntu
4 Aug 2014
Useful Scripts

PowerShell:

# Deny the user logon via the Terminal Services allowLogon property, using ADSI
$ou = [adsi]"LDAP://OU=Marketing,OU=Departments,DC=Company,DC=Domain";
$user = $ou.psbase.get_children().find('CN=UserName');
$user.psbase.invokeSet("allowLogon",0);
$user.setinfo();

Bash:

#!/bin/bash
# Comment out the " line item:" line and the three lines that follow it,
# writing the result to a temp file after backing up the original.
fname="/path/file"
tname="/new/file.tmp"
i="0"
DATE=$(date +%Y%m%d%H%M%S)

# Back up the original file with a timestamp suffix
sudo cp "$fname" "$fname".$DATE

while IFS='' read -r line
do
    if [ "$line" == " line item:" ] || [ "$i" -gt 0 -a "$i" -lt 4 ]
    then
        # Prefix the matched line (and the next three lines) with '#'
        echo -e "#$line" | sudo tee -a "$tname"
        i=$(($i + 1))
    else
        printf "%s\n" "$line" | sudo tee -a "$tname"
    fi
done < "$fname"

# Replace the original file with the edited copy
sudo mv "$tname" "$fname"

PowerShell (psftp file transfer):

$SourcePath = 'C:\Path\';
$DestServer = 'ServerName';
$DestPath = '/path/';
$FileName = 'FileName';
$Output = @()
$cmd = @(
"y",
"lcd $SourcePath",
"cd $DestPath",
"mput $FileName",
"quit"
)

# Run psftp with the queued commands and capture all output, including errors
$Output = $cmd | & "C:\Program Files (x86)\Putty\psftp.exe" -v $DestServer 2>&1;
$Err = [String]($Output -like "*=>*");
If (($LastExitCode -ne 0) -or (($Err.Contains("=>")) -eq $false)) {
    throw "File Failed to Transfer! `n $($Output)";
}

Linux:

# Create a mount point, add the filesystem to fstab, mount it, and verify
sudo mkdir /space;
echo "/dev/space /space ext4 defaults 0 0" | sudo tee -a /etc/fstab;
sudo mount /dev/space /space;
sudo df -h;
ls /dev/;

PowerShell:

# Disable remote control of the user's Terminal Services sessions, then stop this PowerShell process
Add-PSSnapin Quest.ActiveRoles.ADManagement;
connect-QADService -service domain;
set-QADuser UserName -TSRemoteControl 0;
$objCurrentPSProcess = [System.Diagnostics.Process]::GetCurrentProcess();
Stop-Process -Id $objCurrentPSProcess.ID;

 

1 Jul 2014
HBase All Regions in Transition: state=FAILED_OPEN

After I added a jar file to the HBase Master I had a problem where regions failed to transition to a RegionServer. Below are the errors; removing the jar file from the hbase/lib folder resolved this problem (full path to jar: /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hbase/lib/). What tipped me off was the missing class definition: Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/ipc/CoprocessorProtocol.
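
A rough sketch of the cleanup (the jar name below is a placeholder for whatever was added):

# Find the recently added jar in the HBase lib directory
ls -lt /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hbase/lib/ | head

# Move the offending jar out of the way, then restart HBase from Cloudera Manager
sudo mv /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hbase/lib/offending.jar /tmp/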

Failed open of region=REGION.NAME,,4194066667839.6ea7d7ff9276f9c0e9b126c73e25bc54., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3970)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4276)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4249)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4205)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4156)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:475)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:140)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3967)
... 10 more
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/ipc/CoprocessorProtocol
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
...

9:10:19.721 AM INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler
Opening of region {ENCODED => 6ea7d7ff9276f9c0e9b126c73e25bc54, NAME => 'REGION.NAME,,4194066667839.6ea7d7ff9276f9c0e9b126c73e25bc54.', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 28

30 Jun 2014
Cron file failed to load: (username~) ORPHAN (no passwd entry)

This problem bothers me a little. The authentication server failed during a cron job that referenced a specific account: Ubuntu could not authenticate the account, and id username failed. As a result an orphaned entry was left under /var/spool/cron/crontabs/, and any time I tried to edit the cron file under /etc/cron.d/username-cron-file, the reload would fail:

cron[17959]: (*system*username-cron-file) RELOAD (/etc/cron.d/username-cron-file)
cron[17959]: Error: bad username; while reading /etc/cron.d/username-cron-file
cron[17959]: (*system*username-cron-file) ERROR (Syntax error, this crontab file will be ignored)
cron[17959]: (username~) ORPHAN (no passwd entry)

I deleted the spool entry and was able to recreate the cron file.
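
Roughly what that cleanup looked like (the username is a placeholder):

# Remove the orphaned spool entry for the account
sudo rm /var/spool/cron/crontabs/username

# Recreate the cron file, then restart cron so it reloads cleanly
sudo vi /etc/cron.d/username-cron-file
sudo service cron restart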

 

Filed under: Linux, Ubuntu
26 Nov 2013
No Connection!! Clear Your ARP Table

A great article. The need is rare, but sometimes you have to clear the ARP cache:

http://zeldor.biz/2011/09/clear-arp-cache/
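
For quick reference, a minimal sketch of clearing the ARP cache (interface names and addresses omitted):

# Linux: show the current ARP/neighbour table, then flush it
ip -s neigh show
sudo ip -s -s neigh flush all

# Windows equivalents, from an elevated prompt:
#   arp -d *
#   netsh interface ip delete arpcache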

 

 

Filed under: Linux, Windows
7 Nov 2013
Find and delete files recursively in Linux

I needed this command today when my disk ran out of space:

sudo rm `sudo find /var/log -name '*.log.*'` -f

The command will delete every file that matches the pattern *.log.*.
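
The backtick form can fail when many files match or when names contain spaces; a safer equivalent, assuming GNU find:

sudo find /var/log -name '*.log.*' -type f -delete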

Enjoy!

Filed under: Linux
30 Aug 2013
Setup Cloudera Manager Parcel Distribution from a Central Repository

Cloudera Manager supports parcels as an alternate form of distribution for CDH and other system packages. Among other benefits, parcels provide a mechanism for upgrading the packages installed on a cluster from within the Cloudera Manager Admin Console with minimal disruption.

You can upgrade individual parcels, or multiple parcels at the same time -- for example, upgrading CDH and Impala together.

All hosts in a cluster point to the Cloudera Manager server to get their parcels.

To set up a local parcel repository:

  1. Browse to: http://archive.cloudera.com/cdh4/parcels/latest/ and download the Precise version of Hadoop
  2. Browse to: http://archive.cloudera.com/impala/parcels/latest/ and download the Precise version of Impala
  3. Browse to: http://beta.cloudera.com/search/parcels/latest/ and download the Precise version of SOLR

Move the files to the local Parcel repository: /opt/cloudera/parcel-repo

Note: The default location for the local parcel repository is /opt/cloudera/parcel-repo on the Cloudera Manager server. To configure this location, browse to Administration, Settings, Parcels.

  1. Open the manifest.json file, which is in the same directory as the .parcel file you just copied. Find the section of the manifest that corresponds to the parcel you downloaded.
{
  "parcelName": "SOLR-0.9.3-1.cdh4.3.0.p0.366-precise.parcel",
  "components": [
    { "name":     "hbase-solr",
      "version":  "0.9.3-cdh4.3.0-SNAPSHOT",
      "pkg_version":  "0.9.3"
    }
    ,{ "name":     "solr",
      "version":  "4.3.0-cdh4.3.0_search0.9.3-SNAPSHOT",
      "pkg_version":  "4.3.0+74"
    }
  ],
  "hash": "66f5f2bf21334be97d30a47de83f9e37b56aebf4"
}
  2. Create a text file named after the parcel file with the .sha extension (for example: IMPALA-1.0.1-1.p0.431-el6.parcel.sha) and copy the hash code into it (an alternative using sha1sum is sketched after this list):
# cat > SOLR-0.9.3-1.cdh4.3.0.p0.366-precise.parcel.sha
66f5f2bf21334be97d30a47de83f9e37b56aebf4
^C
  3. Place the .sha files into your local parcel repository: /opt/cloudera/parcel-repo
  4. Once these files are in place, Cloudera Manager will pick up the parcel and it will appear on the Hosts, Parcels page when you add a new host. Note that how quickly this occurs depends on the Parcel Update Frequency setting, set by default to 1 hour. You can change this under Administration, Settings, Parcels.
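
As an alternative to copying the hash out of manifest.json, the .sha file can be generated from the parcel itself, assuming the manifest hash is the parcel's SHA-1 (the parcel name below matches the SOLR example above):

cd /opt/cloudera/parcel-repo
sha1sum SOLR-0.9.3-1.cdh4.3.0.p0.366-precise.parcel | awk '{ print $1 }' > SOLR-0.9.3-1.cdh4.3.0.p0.366-precise.parcel.sha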