Monday, November 29, 2010

Memory Leaks - Detection - Tools

http://www.glowcode.com/summary.htm
GlowCode - Develop and deliver high performance Windows and .NET applications with GlowCode, the fastest profiler on the market.
http://www.softwareverify.com/cpp/memory/index.html
C++ memory leak detector - Memory Validator. Memory Validator is a memory leak and memory error detection software tool for use by software developers, software quality assurance testers and customer support staff.
http://sites.google.com/site/dmoulding/vld
Visual Leak Detector is a free, robust, open-source memory leak detection system for Visual C++.
http://wiki.winehq.org/Wine_and_Valgrind
Valgrind (http://www.valgrind.org/) is a set of tools aimed at finding bugs and performance problems in programs. By default, it catches reads of uninitialized memory, accesses to inaccessible memory, and memory leaks.

Tuesday, December 09, 2008

Learning Language

www.coffeebreakspanish.com

www.coffeebreakfrench.com
http://www.word2word.com/coursead.html
http://www.bbc.co.uk/languages/
http://translateclient.googlepages.com/index.en.html

Language translation tool (any language to any language)

http://babelfish.yahoo.com

Conversion Utilities.

ASCII Converter - http://www.mikezilla.com/exp0012.html
CalcEnstein - http://www.calcenstein.com/
Calculator Tab - http://www.calculator-tab.com/
ConvertIt - http://www.convertit.com/
eCalc - http://www.ecalc.com/
Flowmeter - http://www.flowmeterdirectory.com/flowmeter_unit_converter/index.htm
instacalc - http://my.instacalc.com
Online Calculators - http://www.martindalecenter.com/Calculators.html
unitconversion.org - http://www.unitconversion.org/
webMeasure - http://www.lewe.com/measure
webMeasure converts the most common measurement units for you, including metric, imperial and US based values. With the menu on the left you can choose between linear, surface, volume and weight conversion. You can also choose other common sections where a conversion comes in handy once in a while, like clothing sizes.

Thursday, October 30, 2008

Convert a Date/Time to a Unix timestamp and Vice versa

http://www.onlineconversion.com/unix_time.htm
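If you'd rather do the conversion locally, GNU date handles both directions; a small sketch (the timestamp below is just an example value):

# Unix timestamp to a human-readable date
$ date -d @1225368000
# A date/time to a Unix timestamp
$ date -d "2008-10-30 12:00:00" +%s
# The current time as a Unix timestamp
$ date +%s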

Monday, October 06, 2008

MultiThreading and Synchronization & Java questions

http://paulbridger.net/multithreading_tutorial
http://www-128.ibm.com/developerworks/java/library/j-threads1.html
http://bobcat.webappcabaret.net/javachina/faq/01.htm
http://bobcat.webappcabaret.net/javachina/faq/09.htm#cpp_Q04

Create temporary files securely

* A temporary file must have a unique, unpredictable name.
* The name must still be unique when the file is created.
* The file must be opened with exclusive access.
* The file must be opened with appropriate permissions.
* The file must be removed before the program exits.
https://www.securecoding.cert.org/confluence/display/seccode/VOID+FI039-C.+Create+temporary+files+securely
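The CERT rule is written for C (mkstemp() and friends), but the same checklist in shell, assuming GNU mktemp, looks roughly like this:

# Create a uniquely and unpredictably named file, mode 0600; bail out on failure
$ tmpfile=$(mktemp /tmp/myapp.XXXXXX) || exit 1
# Make sure it is removed when the script exits
$ trap 'rm -f "$tmpfile"' EXIT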

Thursday, June 19, 2008

find command - Best Practices

Find and change the permissions for directories, starting from the current directory:
find . -type d -exec chmod 755 {} \;

For files (-type d matches directories, -type f matches files):
find . -type f -exec chmod 644 {} \;

For example, to change permissions only on PHP files:
find . -iname "*.php" -exec chmod 644 {} \;
or
find /home/username/xxx.yyy/* -type f -exec chmod 644 {} \;

If we only want to find files with 'up' at the start of their name, we use the '-name' argument.
So the following would be used:
$ find . -name up\*
./up1301.txt
./up1302.txt
./misc/uploads

Now that we know these files should have lowercase names, we can use find to get a list of the files whose names don't start with a lowercase 'up':

$ find -iname up\* -not -name up\*

To compile two lists, one containing the names of all .php files and the other the names of all .js files use:
$ find ~ -type f \( -name \*.php -fprint php_files , -name \*.js -fprint javascript_files \)

Pruning

Suppose you have a playlist file listing all David Gray .ogg files but there are a few albums you don't want included.
You can prevent those albums from going into the playlist by using the -prune action which works by attempting to match the names of directories against the given expression.
This example excludes the Flesh and Lost Songs albums:

$ find . \( -path './mp3/David_Gray/Flesh*' -o -path './mp3/David_Gray/Lost Songs*' \) -prune -o -ipath '*david gray*' -print

Print me the way you want me, baby!

Changing the output information

If you want more than just the names of the files displayed, find's -printf action lets you have just about any type of information displayed. Looking at the man page there is a startling array of options.
These are used the most:
%p  filename, including the name(s) of the directory the file is in
%m  permissions of the file, displayed in octal
%f  the filename, with no directory names included
%g  name of the group the file belongs to
%h  name of the directory the file is in, filename not included
%u  username of the owner of the file

As an example:

$ find . -name \*.ogg -printf %f\\n
generates a list of the filenames of all .ogg files in and under the current directory.
The 'double backslash n' is important; '\n' indicates the start of a new line. The single backslash needs to be escaped by another one so the shell doesn't take it as one of its own.

Where to output information?


find has a set of actions that tell it to write the information to any file you wish. These are the -fprint, -fprint0 and -fprintf actions.

Thus

$ find . -iname david\ gray\*ogg -type f -fprint david_gray.m3u
is more efficient than
$ find . -iname david\ gray\*ogg -type f > david_gray.m3u

Execute!

Find is an excellent tool for generating reports on basic information regarding files, but what if you want more than just reports? You could just pipe the output to some other utility:

$ find ~/oggs/ -iname \*.mp3 | xargs rm

This isn't all that robust, though: filenames containing spaces will break the pipeline.
It is better to use the -exec action:

$ find ~/oggs/ -iname \*.mp3 -exec rm {} \;

It mightn't read as well, but it does mean the files are immediately deleted once found.
'{}' is a placeholder for the name of the file that has been found and as we want BASH to ignore the semicolon and pass it verbatim to find we have to escape it.

To be cautious, the -ok action can be used instead of -exec. The -ok action means you'll be asked for confirmation before the command is executed.

There are many ways these can be used in 'real life' situations:
If you are locked out from the default Mozilla profile, this will unlock you:

$ find ~/.mozilla -name lock -exec rm {} \;

To compress .log files on an individual basis:

$ find . -name \*.log -exec bzip2 {} \;

Give user ken ownership of files that aren't owned by any current user:

$ find . -nouser -exec chown ken {} \;

View all .dat files that are in the current directory with vim. Don't search any subdirectories.

$ vim -R `find . -maxdepth 1 -name \*.dat`

Look for directories called CVS which are at least four levels below the current directory:

$ find -mindepth 4 -type d -name CVS

Time waits for no-one

You might want to search for recently created files, or grep through the last 3 days worth of log files.

Find comes into its own here: it can limit the scope of the files found according to timestamps.

Now, suppose you want to see what hidden files in your home directory changed in the last 5 days:

$ find ~ -mtime -5 -name \.\*

If you know something has changed much more recently than that, say within the last 14 minutes, there's the -mmin argument:

$ find ~ -mmin -14 -name \.\*

Be aware that doing an 'ls' will affect the access time-stamps of the files shown by that action. If you do an ls to see what's in a directory and then try the above to see what files were accessed in the last 14 minutes, all files will be listed by find.

To locate files that have been modified since some arbitrary date, use this little trick:

$ touch -d "13 may 2001 17:54:19" date_marker
$ find . -newer date_marker

To find files created before that date, use the -cnewer and negation conditions:

$ find . \! -cnewer date_marker

To find a file which was modified yesterday, but less than 24 hours ago:

$ find . -daystart -maxdepth 1 -mtime 1

The -daystart argument means the day starts at the actual beginning of the day, not 24 hours ago.
This argument has meaning for the -amin, -atime, -cmin, -ctime, -mmin and -mtime options.


Finding files of a specific size

A file of characters (bytes)

To locate files that contain a certain number of characters, you can't go far wrong with:

# find files with exactly 1000 characters
$ find . -size 1000c

# find files containing between 600 and 700 characters, inclusive
$ find . -size +599c -and -size -701c

'Characters' is a misnomer: 'c' is find's shorthand for bytes; thus this will only work for ASCII text, not Unicode.

Consulting the man page we see
c = bytes
w = 2 byte words
k = kilobytes
b = 512-byte blocks

Thus we can use find to list files of a certain size:

$ find /usr/bin -size 48k

Empty files

You can find empty files with:
$ find . -size 0c
Using the -empty argument is more efficient.

To delete empty files in the current directory:

$ find . -maxdepth 1 -empty -exec rm {} \;

Users & Groupies

Users

To locate files belonging to a certain user:
# find /etc -type f \!  -user root -exec ls -l {} \;
-rw------- 1 lp sys 19731 2002-08-23 15:04 /etc/cups/cupsd.conf
-rw------- 1 lp sys 97 2002-07-26 23:38 /etc/cups/printers.conf

A subset of that same information, without having the cost of an exec:

root@ttyp0[etc]# find /etc -type f \!  -user root \
-printf "%h/%f %u\\n"
/etc/cups/cupsd.conf lp
/etc/cups/printers.conf lp

If you know the uid and not the username then use the -uid argument:

$ find /usr/local/htdocs/www.linux.ie/ -uid 401

-nouser means there is no user in the /etc/passwd file for the files in question.

Groupies

find can locate files that belong to a specific group - or not, depending on how you use it.
This is especially suited to tracking down files that should belong to the www group but don't:

$ find /www/ilug/htdocs/  -type f \! -group  www

The -nogroup argument means there is no group in the /etc/group file for the files in question.
This may arise if a group is removed from the /etc/group file sometime after it's been used.
To search for files by the numerical group ID use the -gid argument:

$ find -gid 100

Permissions

If you've ever had one or more shell scripts fail to run because their execute bits weren't set, and want to sort things out once and for all, then you should like this little example:

knoppix@ttyp1[bin]$ ls -l ~/bin/
total 8
-rwxr-xr-x 1 knoppix knoppix 21 2004-01-20 21:42 wl
-rw-r--r-- 1 knoppix knoppix 21 2004-01-20 21:47 ww

knoppix@ttyp1[bin]$ find ~/bin/ -maxdepth 1 -perm 644 -type f \
-not -name .\*
/home/knoppix/bin/ww

Find locates the file that isn't set to execute, as we can see from the output of ls.

Types of files

The '-type' argument obviously specifies what type of file find is to go looking for (remember in Linux absolutely everything is represented as some type of file).
So far I've been using '-type f' which means search for normal files.

If we want to locate directories with '_of_' in their name we'd use:

$ find . -type d -name '*_of_*'

The list generated by this won't include symbolic links to directories.
To get a list including directories and symbolic links:

$ find . \( -type d -or -type l \) -name '*_of_*'

For a complete list of types check the man page.

Regular expressions

Thus far we've been using casual wildcards to specify certain groups of files. Find also supports regular expressions, so we can use more advanced criteria for locating files. The matching expression must apply to the entire path:

ken@gemmell:/home/library/code$ find . -regex '.*/mp[0-4].*'
./library/sql/mp3_genre_types.sql

The -regex test has a case insensitive counterpart, -iregex.

There is a little gotcha with using regular expressions: You must allow for the full path of the files found, even if find is to search the current directory:

$ cd /usr/share/doc/samba-doc/htmldocs/using_samba
$ find . -regex './ch0[1-2]_0[1-3].*'
./ch01_01.html
./ch01_02.html
./ch02_01.html
./ch02_02.html
./ch02_03.html

Limiting by filesystem

As an experiment, get a MS formatted floppy disk and mount it as root:

$ su -
# mount /floppy
# mount
/dev/sda2 on / type ext2 (rw,errors=remount-ro)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/fd0 on /floppy type msdos (rw,noexec,nosuid,nodev)

Now try

$ find / -fstype msdos -maxdepth 1 
You should see only /floppy listed.
To get the reverse of this, i.e. a listing of directories that are not on msdos file-systems, use
$ find / -maxdepth 1 \( -fstype msdos \) -prune -or -print
This is a start on limiting the files found by system type.

Saturday, June 14, 2008

QEMU

QEMU is a generic and open source machine emulator and virtualizer.

When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.

When used as a virtualizer, QEMU achieves near native performance by executing the guest code directly on the host CPU. A host driver called the QEMU accelerator (also known as KQEMU) is needed in this case. The virtualizer mode requires that both the host and guest machine use x86 compatible processors.

The supported host and target CPUs are listed in the status page.

http://bellard.org/qemu/about.html
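To get a feel for the emulator mode, a hedged sketch (the image name, ISO and sizes are made up; qemu-img and the qemu launcher are the standard tools of that era):

# Create a 10 GB growable disk image
$ qemu-img create -f qcow2 disk.img 10G
# Boot an installer CD against it with 512 MB of RAM
$ qemu -hda disk.img -cdrom install.iso -boot d -m 512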

Thursday, May 29, 2008

Process Execution (Programming)

What happens when a process is executed:
1. The kernel loads the required program (an ELF object) into memory, and also loads the runtime linker (ld.so.1(1)) into memory.
2. The kernel then transfers control initially to the runtime linker.
3. It is the runtime linker's job to examine the program, find any dependencies it has (in the form of shared objects), load those shared objects into memory, and then bind all of the symbol references (function calls, data references, etc.) from the program to each of those dependencies. Of course, as it loads each shared object it must in turn do the same examination on each of them and load any dependencies they require.
4. Once all of the dependencies are loaded and their symbols have been found, the runtime linker fires the .init sections for each shared object loaded and finally transfers control to the executable, which calls main().
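You can watch steps 3 and 4 from the shell; a small sketch (ldd is standard, while LD_DEBUG is a glibc feature, so the output varies by platform):

# List the shared-object dependencies the runtime linker must satisfy
$ ldd /bin/ls
# Ask the runtime linker to narrate library loading (debug output goes to stderr)
$ LD_DEBUG=libs /bin/ls > /dev/null
# The same for symbol binding
$ LD_DEBUG=bindings /bin/ls > /dev/null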

What is interposition? (Programming)

Suppose a process is made up of several shared objects, and two of them, libX.so and libY.so, export the same symbol xy(). Under the traditional symbol search model, any reference to the symbol xy() will be bound to the first instance of xy() that is found. So, if libX.so is loaded before libY.so, the instance of xy() within libX.so is used to satisfy all references. The instance of xy() within libX.so is said to interpose on the instance in libY.so.
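A minimal sketch to see this from the shell (the file names and gcc invocations are my own, assuming gcc and a glibc-style runtime linker):

$ cat > libX.c <<'EOF'
#include <stdio.h>
void xy(void) { puts("xy() from libX"); }  /* first definition */
EOF
$ cat > libY.c <<'EOF'
#include <stdio.h>
void xy(void) { puts("xy() from libY"); }  /* second definition */
EOF
$ cat > main.c <<'EOF'
void xy(void);
int main(void) { xy(); return 0; }  /* which xy() do we get? */
EOF
$ gcc -shared -fPIC -o libX.so libX.c
$ gcc -shared -fPIC -o libY.so libY.c
# Link order decides: libX.so comes first, so its xy() interposes
$ gcc -o demo main.c -L. -lX -lY
$ LD_LIBRARY_PATH=. ./demo
xy() from libX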

Notes on setting up Cygwin

Get the "setup.exe" tool from www.cygwin.com.

*Setup will lead you through a series of choices (keep the default choices in general for directory names and to install from the internet, but you'll need to pick a local mirror site for the files), and
will let you choose the packages you want to install with a nice GUI.

What I installed

  • I wanted TeX and program development environments, and like to work in X-windows. From the GUI that runs from the setup.exe program (it takes a while for setup to scan the archive the first time), I selected the following packages beyond the defaults in each category. (Setup is smart: selecting something will select most, but sometimes not all, of the other necessary files; the setup program checks for dependencies at the end.)
    • Admin
      • cron
    • Devel
      • cvs
      • gcc-g77
      • make
      • (bison, byacc, and flex)
    • Editors
      • emacs
      • emacs-X11
      • vim
    • Graphics
      • ghostscript-x11
      • gv
    • Interpreters
      • (gawk; sed is in the Base default)
    • Net
      • openssh
      • rsync
    • Publishing
      • tetex (then don't forget to run texconfig to select paper size and other default options)
      • tetex-base
      • tetex-extras (for BibTeX stuff in my case; run texconfig rehash so TeX knows where to find the extra files)
      • tetex-x11
    • Shells
      • tcsh
    • Text
      • aspell
      • enscript
      • more
    • X11 (no subdirectories, select install for the whole package; the default option isn't quite enough for what I'm doing)
Cygwin's package search page, www.cygwin.com/packages/, is very useful for finding missing files or commands. The setup program is smart and automatically gets most of the dependent files, but it helps when you're looking for something you want or are missing...

Path work

Add or set the Windows environment variable HOME to c:\ -- this is where you wind up when you type cd at a Cygwin prompt.
  • The Cygwin initial paths and other setup parameters are in c:\cygwin\etc\profile. After a little editing, my Cygwin path is something like: PATH="/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:.:$PATH" In previous versions /usr/X11R6/bin had to be first in the list to pick up the version of ghostscript (gs) that works in X-Windows, but this seems to have changed in recent releases. The problem before was that a version that didn't have an x11 device was ahead of the X version in the default path. I also added the reference to the current directory at the end of the Cygwin native path but before the Windows path. A full copy of my profile file is below and can be used as a model
Starting the X-Windows manager
Refer to http://www.astro.umd.edu/~harris/cygwin/index.html

Initialization files

For an XP installation, here are my personalized versions of:
  • profile (modified version of c:\cygwin\etc\profile, written over original version, now saved as profile.org)
  • .xinitrc (modified version of c:\cygwin\etc\X11\xinit\xinitrc saved as c:\.xinitrc)
  • .cshrc (modified version of c:\cygwin\etc\csh.cshrc saved as c:\.cshrc)
  • .login (modified version of c:\cygwin\etc\csh.login saved as c:\.login)
  • .emacs (an emacs initialization file that works for me -- there are many other examples and discussions on the web)
  • startxwin (modified version of c:\cygwin\usr\X11R6\bin\startxwin.sh saved as c:\cygwin\usr\X11R6\startxwin)

Notes on rsync

To rsync to another disk, for instance disk e: (easy to see which are available with the df command):
rsync -avu --delete sourcedir/ /cygwin/e/destdir/
This command removes files that have been removed from the source directory but will not overwrite newer files in the destination.

To rsync to another system with ssh over the net:
rsync -avu --delete -e ssh sourcedir/ username@machine:~/destdir/

To avoid typing passwords for each network transfer (a command sketch follows this list):
  • Generate key for ssh with ssh-keygen. Take all defaults including a blank passphrase (otherwise you'll want a passphrase and to invoke an ssh agent, a good idea if you have any security concerns past the most basic ones). Keep track of the file locations.
  • Copy the generated file, id_rsa.pub, to the ~/.ssh directory on the remote machine. Rename it or append it to a file titled authorized_keys. The file must be read-write for the owner only (chmod 600).
  • It is possible to edit this file to restrict access to this mode following instructions in this link.
  • The counterpart to the public file is id_rsa; that may be copied (securely!) to other local machines so you can log in from them as well.
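A sketch of those steps (username@machine is a placeholder):

# Generate an RSA key pair, taking the defaults (blank passphrase)
$ ssh-keygen -t rsa
# Append the public key to the remote authorized_keys file and lock down its mode
$ cat ~/.ssh/id_rsa.pub | ssh username@machine \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'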

Grep - global/regular expression/print

Grep has several options; here are some of them.
-l For listing the names of matching files.
-r Search the directories recursively.
-e If the pattern begins with a '-' character.
-w If you want to match a whole word, not part of a word.
-C NUM Prints NUM lines of context around each matching line (e.g. -C 2).
/dev/null Appending it at the end forces grep to print the name of the file.
-a or --binary-files=text Forces grep to output the lines even from the files that appear to be binary files.
-I or --binary-files=without-match For eliminating binary file matches.
-lv Lists the names of all files containing one or more lines that do not match.
-L or --files-without-match To list the names of all files that contain no matching lines.

fgrep stands for fixed grep; egrep is extended grep.
There are four major variants of grep, controlled by the following options.
`-G'
`--basic-regexp'
Interpret the pattern as a basic regular expression. This is the default.

`-E'
`--extended-regexp'
Interpret the pattern as an extended regular expression.

`-F'
`--fixed-strings'
Interpret the pattern as a list of fixed strings, separated by newlines, any of which is to be matched.

`-P'
`--perl-regexp'
Interpret the pattern as a Perl regular expression.
Examples:

  1. How can I list just the names of matching files?


    grep -l 'main' *.c

    lists the names of all C files in the current directory whose contents mention `main'.

  2. How do I search directories recursively?


    grep -r 'hello' /home/gigi

    searches for `hello' in all files under the directory `/home/gigi'. For more control of which files are searched, use find, grep and xargs. For example, the following command searches only C files:


    find /home/gigi -name '*.c' -print | xargs grep 'hello' /dev/null

    This differs from the command:


    grep -r 'hello' *.c

    which merely looks for `hello' in all files in the current directory whose names end in `.c'. Here the `-r' is probably unnecessary, as recursion occurs only in the unlikely event that one of the `.c' files is a directory.

  3. What if a pattern has a leading `-'?


    grep -e '--cut here--' *

    searches for all lines matching `--cut here--'. Without `-e', grep would attempt to parse `--cut here--' as a list of options.

  4. Suppose I want to search for a whole word, not a part of a word?


    grep -w 'hello' *

    searches only for instances of `hello' that are entire words; it does not match `Othello'. For more control, use `\<' and `\>' to match the start and end of words. For example:


    grep 'hello\>' *

    searches only for words ending in `hello', so it matches the word `Othello'.

  5. How do I output context around the matching lines?


    grep -C 2 'hello' *

    prints two lines of context around each matching line.

  6. How do I force grep to print the name of the file?

    Append `/dev/null':


    grep 'eli' /etc/passwd /dev/null

    gets you:


    /etc/passwd:eli:DNGUTF58.IMe.:98:11:Eli Smith:/home/do/eli:/bin/bash

  7. Why do people use strange regular expressions on ps output?


    ps -ef | grep '[c]ron'

    If the pattern had been written without the square brackets, it would have matched not only the ps output line for cron, but also the ps output line for grep. Note that on some platforms, ps limits its output to the width of the screen; grep does not have any limit on the length of a line except the available memory.

  8. Why does grep report "Binary file matches"?

    If grep listed all matching "lines" from a binary file, it would probably generate output that is not useful, and it might even muck up your display. So GNU grep suppresses output from files that appear to be binary files. To force GNU grep to output lines even from files that appear to be binary, use the `-a' or `--binary-files=text' option. To eliminate the "Binary file matches" messages, use the `-I' or `--binary-files=without-match' option.

  9. Why doesn't `grep -lv' print nonmatching file names?

    `grep -lv' lists the names of all files containing one or more lines that do not match. To list the names of all files that contain no matching lines, use the `-L' or `--files-without-match' option.

  10. I can do OR with `|', but what about AND?


    grep 'paul' /etc/motd | grep 'franc,ois'

    finds all lines that contain both `paul' and `franc,ois'.

  11. How can I search in both standard input and in files?

    Use the special file name `-':


    cat /etc/passwd | grep 'alain' - /etc/motd

  12. How to express palindromes in a regular expression?

    It can be done by using back references; for example, a palindrome of 4 characters can be written in BRE.


    grep -w -e '\(.\)\(.\).\2\1' file

    It matches the word "radar" or "civic".

    Guglielmo Bondioni proposed a single RE that finds all the palindromes up to 19 characters long.


    egrep -e '^(.?)(.?)(.?)(.?)(.?)(.?)(.?)(.?)(.?).?\9\8\7\6\5\4\3\2\1$' file

    Note this is done by using GNU ERE extensions; it might not be portable to other greps.

  13. Why do my expressions with the vertical bar fail?


    /bin/echo "ba" | egrep '(a)\1|(b)\1'

    The first alternate branch fails, and because the first group did not take part in the match, the back-reference in the second alternate branch fails as well. By contrast, "aaba" will match: the first group participates in the match and can be reused in the second branch.

Monday, May 19, 2008

Unix useful Commands and traces

*You can also use "free -m" to see the memory status in megabytes. Change the 'm' to 'k' or 'g' to see it in kilobytes or gigabytes respectively.
*You can use the "watch free" command to see the memory usage in real time, but the display is only in KBs.
* Use the command "fdisk -l" to list all drives you have. This will list the USB devices you have connected as well. It gives the details of the partitions on each disk.

**Try running every command you learn under 'strace', e.g. 'strace ls' - that will show you the system calls a command makes and give you more insight into the working of the system/kernel. It is helpful if you wish to take up systems programming.
Similarly, 'ltrace' will show you the library calls a program makes.

I would suggest using the "lshw" command; this will give all the info about the hardware present on your computer.
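For instance, both tools accept -c to print a summary instead of a full trace:

# Count and time the system calls ls makes
$ strace -c ls > /dev/null
# The same summary for library calls
$ ltrace -c ls > /dev/null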

Friday, April 11, 2008

What do BTW, FAQ, FYI, IMHO, RTFM, and other acronyms mean?

These are all abbreviations for specific phrases commonly used in informal written computer correspondence, online forums and boards, and online gaming. Following are some common acronyms and their meanings:
AFAIC As far as I'm concerned
AFAIK As far as I know
AFK Away from keyboard
BRB Be right back
BTDT Been there, done that
BTW By the way
BUAG Butt-ugly ASCII graphic
C/C Comments and criticism
EOM End of message
FAQ Frequently Asked Question. When people say "the FAQ", they are generally referring to a list of answers to Frequently Asked Questions. These are posted monthly on many newsgroups or mailing lists to reduce discussion of topics that have already been thoroughly covered. It's a good idea to look at a FAQ file for a newsgroup or mailing list before participating in it. For help in finding FAQ files, see Where can I find a repository of Usenet FAQ files? A large list of all known FAQ postings in newsgroups is also posted periodically in the Usenet newsgroup news.admin.
FTW For the win
FWIW For what it's worth
FYI For your information
HTH Hope this helps
IANAL I am not a lawyer
IIRC If I recall correctly
IMHO In my humble opinion
IMNSHO In my not so humble opinion
IMO In my opinion
IOW In other words
l33t or 1337 From "elite". This has become a term used to describe the informal communication of Internet gaming. L33t speak is easily identified by the substitution of number and other characters for regular letters; e.g., hackers becomes h4XX0rz.
LFG Looking for group, usually used in MMORPGs
LMAO Laughing my butt off
LOL Laughing out loud
MMORPG Massive, multiplayer, online role-playing game, such as World of Warcraft or Star Wars Galaxies
MOTAS Member of the appropriate sex
MOTOS Member of the opposite sex
MOTSS Member of the same sex
NG Newsgroup
n00b From "newbie", meaning a newcomer not yet familiar with the rules
OMG Oh my God
OTOH On the other hand
PWN Usage of the term "own", as in "I PWNed you!"
QQ Cry more, noob (representation of eyes crying, often found in MMORPGs)
RL Real Life, as opposed to the Internet
ROFL Rolling on the floor laughing
ROFLMAO Rolling on the floor laughing my butt off
RTFM Read The Fine Manual. This may be interpreted as: "You have asked a question which would best be answered by consulting the manual (or FAQ, or other help files), a copy of which should be in your possession. The question you have asked is clearly answered in the manual and you are wasting time asking people to read it to you." It's good netiquette to mail this type of answer to another user rather than post it in public messages.
SO Significant other, used to refer to someone's romantic partner without making any assumptions about gender or legal status
TLA Three letter acronym
TTFN Ta ta for now
TTYL Talk to you later
W/E Whatever
w00t An expression of joy
WFN Wrong forum, noob
WTF What the heck
YMMH You might mean here
YMMV Your mileage may vary
{g} Grin
{BG} Big grin

Thursday, April 10, 2008

Simple binds aren't always so simple

In my last posting, I showed how to call the low-level API exposed by WLDAP32.DLL to authenticate via an LDAP bind. The authentication function - ldap_simple_bind_s() - returns 0 when the credentials supplied were successfully authenticated. I left out what happens when the authentication function returns an error code. It turns out that determining what caused your authentication call to fail can be a bit subtle - at least when the directory you're binding to is Active Directory.

PLEASE NOTE: I'm sharing this information because I don't want anyone to have to figure this out the way I did (quality time with ADSIEdit, a test domain controller loaded in a Virtual PC, and lots of VBScript fragments pulled from around the Internet).

First, here's what can go wrong when calling ldap_simple_bind_s() using a connection to an Active Directory LDAP server: the account may be disabled, expired or locked out; the password may have expired or need to be reset at next logon; or the password may simply be wrong.

And here's how to tell them apart:

  • Any error other than LDAP_INVALID_CREDENTIALS implies that it’s not any of the other cases I’ve listed
  • If ldap_simple_bind_s() returns LDAP_INVALID_CREDENTIALS
    • Test for the disabled account condition by checking if the userAccountControl attribute for that user has the ACCOUNTDISABLE (0x00000002) bit set. If it's set, the account is disabled.
    • Test for the expired account condition by checking if the accountExpires attribute has an invalid value – this is a little bit tricky, as it’s a 64 bit integer field that may or may not be present (because account may or may not have an expiration date.) If it’s present, you’ll need to convert it to a date/time value and compare it to the current system date/time. It gets even more subtle than that, but that’s the subject of a future posting…
    • Test for the locked out account condition by checking the lockoutTime attribute. If it comes back with a non-zero value, the account is locked out.
    • Test for the must change password condition by checking the pwdLastSet attribute. If it comes back with a 0 value, the password must be reset at next login. There’s a flag in the userAccountControl attribute that looks like it corresponds to this condition, but in my experience, I’ve found that this is the only reliable way to tell.
    • Test for the expired password condition by checking the pwdLastSet attribute against domain policy. This also involves date arithmetic with 64 bit integer attributes, which I’ll cover in a future posting.
    • If all those tests fail, you can safely presume that you've been given a bad password. (A quick way to pull the attributes above for inspection is sketched just below.)
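To poke at those attributes by hand, an OpenLDAP ldapsearch sketch (the host, bind DN, base DN and sAMAccountName are placeholders):

# Pull the account-state attributes for one user from AD
$ ldapsearch -x -H ldap://dc01.example.com -D "admin@example.com" -W \
    -b "dc=example,dc=com" "(sAMAccountName=jdoe)" \
    userAccountControl accountExpires lockoutTime pwdLastSet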

As you've probably surmised, I had to write an application that had a critical need to tell the difference between all these conditions and was surprised at how hard it was to do so. Hopefully, this will help someone else charged with the same task (or hopefully, no one else will ever have to do this). At any rate, that's how to tell why your LDAP bind to Active Directory didn't work.

Somewhere, the system administrators of the world are laughing at me. :-)

jmp

Tuesday, February 26, 2008

Some useful Utilities

Tech info section - http://aruljohn.com/info/
NETWORK: IP address tracking, telephone tracking, MAC address lookup, IP/CIDR subnet, IP to hostname, hostname to IP, view HTTP headers
GEOGRAPHIC: weather forecast, zip code lookup, area code lookup, country information
MISCELLANEOUS: word pronunciation, browser language, phishing website test, gzip compression test, html color picker, stock quotes lookup, proxy server list, news feed (rss)

Monday, February 18, 2008

National Program on technology enhanced learning

The National Program on Technology Enhanced Learning is an initiative of the IITs and IISc to make their courseware available free of cost on the Internet.
As part of this initiative, lecture videos of different IIT and IISc courses have been placed on YouTube:

http://www.youtube.com/user/nptelhrd

stack dump vs core dump

Stack dump, as the name suggests, only contains the stack information of
the executable that generated it, at the point in time it was generated.

Core dump contains information about memory and registers apart from the
stack info.

You can generate a core dump of a running executable using 'gcore' on Linux or 'dumper' on Cygwin. If your question is about programmatically generating a core dump, you would need to take a look at gcore's implementation to see what it does.
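For example, a sketch with gdb's gcore (the pid 1234 and program name are placeholders):

# Snapshot a running process into a core file without killing it
$ gcore -o /tmp/myapp 1234        # writes /tmp/myapp.1234
# Later, inspect its memory, registers and thread stacks
$ gdb /usr/bin/myapp /tmp/myapp.1234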

Monday, January 28, 2008

Ruby, Io, PHP, Python, Lua, Java, Perl, Applescript, TCL, ELisp, Javascript, OCaml, Ghostscript, and C Fractal Benchmark

I've always enjoyed fractals, and was curious if scripting languages were up to the task. I wrote a very simple Mandelbrot set generator for my test. Rather than optimizing for each language, I tried to write each program in approximately the same way in each language to make a reasonable performance comparison.

Here are the results from running on my 867 MHz PowerBook G4. Shorter is better. Please note, the following benchmarks are not scientific, and were simply done to satisfy my curiosity. Your mileage may vary.

Feel free to send me ports to any other languages.


Zimbra Collaboration Suite 5.0

Zimbra on your Desktop

Zimbra Desktop is the next generation leap forward for Web 2.0 applications - now you can have Zimbra's Ajax-based collaboration experience online and offline. That means when you are out of the office without a connection (say, in a plane, train, or automobile), you can keep working without missing a beat. Write email, add new appointments, edit documents, and when you re-connect your changes will be automatically synced to the Zimbra Server.

Zimbra Desktop benefits:

  • The better overall usability of Web 2.0 (conversation view, tags, Zimlets) comes to the desktop; plus the web and desktop experience are now the same
  • Switch from online to offline mode seamlessly and automatically; when online you are immune to hiccups and interruptions caused by server latency
  • Faster search, better rich mail rendering, and a self-organizing inbox more adept at handling larger email volumes than traditional clients (no more 2GB mailbox limits!)
  • Significantly reduced administration overhead of managing and maintaining local files; they are synced to the Zimbra Server where they can be safeguarded
  • Fundamentally a cross browser, cross platform solution (Windows, Apple, Linux)
  • Expensive investments in proprietary clients are no longer required


Wednesday, December 05, 2007

JMeter

JMeter - Apache JMeter

Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.


Tuesday, October 09, 2007

Distributed Computing

Google, IBM promote 'cloud' computing at universities

Google Inc. and IBM have teamed up to offer a curriculum and support for software development on large-scale distributed computing systems, with six universities signing up so far.

The program is designed to help students and researchers get experience working on Internet-scale applications, the companies said. The relatively new form of parallel computing, sometimes called cloud computing, hasn't yet caught on in university settings, said Colleen Haikes, an IBM spokeswoman.

"Right now, although the technique is being used in industry, it's not being taught in universities," she said.

IBM and Google are providing hardware, software and services to add to university resources, the two companies said.

The University of Washington signed up with the program late last year. This year, five more schools, including MIT, Stanford University and the University of Maryland, have joined the program. The two companies expect to expand the program to other universities in the future.

The program focuses on parallel computing techniques that take computational tasks and break them into hundreds or thousands of smaller pieces to run across many servers at the same time. The techniques allow Web applications such as search, social networking and mobile commerce to run quickly, the companies said in a press release.

IBM and Google have dedicated a cluster of several hundred computers -- including PCs donated by Google and IBM BladeCenter servers -- and the companies expect the cluster to grow to more than 1,600 processors.

The companies call these clusters "cloud" computing. A cloud is a collection of machines that can serve as a host for a variety of applications, including interactive Web 2.0 applications. Clouds support a broader set of applications than do traditional computing grids, because they allow various kinds of middleware to be hosted on virtual machines distributed across the cloud, Haikes said.

IBM and Google have created several resources for the program, including the following:

  • A cluster of processors running an open-source version of Google's published computing infrastructure, including MapReduce and GFS from Apache's Hadoop project, a software platform that lets one easily write and run applications that process vast amounts of data.
  • A Creative Commons-licensed curriculum on parallel computing developed by Google and the University of Washington.
  • Open-source software designed by IBM to help students develop programs for clusters running Hadoop. The software works with Eclipse, an open-source development platform.


Valgrind


Valgrind Home

Valgrind is an award-winning suite of tools for debugging and profiling Linux programs. With the tools that come with Valgrind, you can automatically detect many memory management and threading bugs, avoiding hours of frustrating bug-hunting, making your programs more stable. You can also perform detailed profiling, to speed up and reduce memory use of your programs.

The Valgrind distribution currently includes four tools: a memory error detector, a cache (time) profiler, a call-graph profiler, and a heap (space) profiler. It runs on the following platforms: X86/Linux, AMD64/Linux, PPC32/Linux, PPC64/Linux.
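A typical run of the memory error detector looks like this (./myprog is a placeholder):

# Run under the default memcheck tool and report each leak with a stack trace
$ valgrind --leak-check=full ./myprog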


Thursday, August 23, 2007

Determining IP information for eth0... failed;

The "Determining IP information for eth0... failed; no link present. Check cable?" error message is a known problem since Red Hat 9.0, and is documented in the VMware Knowledgebase along with a solution.

It can be fixed by adding the following line to the end of /etc/sysconfig/network-scripts/ifcfg-eth0 and restarting the network service:

check_link_down() { return 1; }
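As a command-level sketch (run as root; the service command assumes a Red Hat-style init):

# Override the link-detection check, then restart networking
$ echo 'check_link_down() { return 1; }' >> /etc/sysconfig/network-scripts/ifcfg-eth0
$ service network restart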


Tuesday, July 31, 2007

Distributed Version System

Wednesday, July 11, 2007

Base64


Base64 encoding makes it possible to send all kinds of data via Internet email.

If the internet is the information highway, then the path for email is a narrow ravine. Only very small carts can pass.

The transport system of email is designed for plain ASCII text only. Trying to send text in other languages or arbitrary files is like getting a truck through the ravine.

How Does the Big Truck go Through the Ravine?

Then how do you send a big truck through a small ravine? You have to take it to pieces on the one end, transport the pieces through the ravine, and rebuild the truck from the pieces on the other end.

The same happens when you send a file attachment via email. In a process known as encoding, the binary data is transformed to ASCII text, which can be transported in email without problems.

On the recipient's end, the data is decoded and the original file is rebuilt.

One method of encoding arbitrary data as plain ASCII text is Base64. It is one of the techniques employed by the MIME standard to send data other than plain text.
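A quick sketch with the coreutils base64 tool (photo.jpg is a placeholder):

# Encode a binary file as plain ASCII text
$ base64 photo.jpg > photo.txt
# Decode it back into the original bytes on the other end
$ base64 -d photo.txt > photo.jpg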


Wednesday, June 20, 2007

SAML Artifact Authentication

SAML
Sets of rules describing how to embed and extract SAML assertions into a framework or protocol are called profiles of SAML. A profile describes how SAML assertions are embedded in or combined with other objects (for example, files of various types, or protocol data units of communication protocols) by an originating party, communicated from the originating site to a destination, and subsequently processed at the destination. A particular set of rules for embedding SAML assertions into and extracting them from a specific class of <FOO> objects is termed a <FOO> profile of SAML.

Two HTTP-based techniques are used in the web browser SSO profiles for conveying information from one site to another via a standard commercial browser.
 • SAML artifact: A SAML artifact of "small" bounded size is carried as part of a URL query string such that, when the artifact is conveyed to the source site, the artifact unambiguously references an assertion. The artifact is conveyed via redirection to the destination site, which then acquires the referenced assertion by some further steps. Typically, this involves the use of a registered SAML protocol binding. This technique is used in the browser/artifact profile of SAML.
 • Form POST: SAML assertions are uploaded to the browser within an HTML form and conveyed to the destination site as part of an HTTP POST payload when the user submits the form. This technique is used in the browser/POST profile of SAML.
Cookies are not employed in any profile, as cookies impose the limitation that both the source and destination site belong to the same "cookie domain."



Thursday, June 07, 2007

Visual Assist


Friday, January 19, 2007

TRACE Route utility

I am sure anyone who is at least Internet savvy will be aware that to move data from one point, say A, to another point, say B, across the Internet, it has to pass through a number of intermediary points, say C, D, E.... But what many won't know is that your data is not transferred in one piece when it is sent over the net; rather, it is split into chunks of say 1500 bytes each, then each chunk is enclosed in what is known as a packet, which contains some additional data such as the destination IP address and port number, apart from some other details which give the packet its unique identity, and finally it is sent across the net.

While the packets travel the path from point A to point B, each packet may take a different path depending upon diverse factors and eventually they are merged together in the same order at the receiving end to provide the document you sent in the first place.

The intermediate gateways through which the packets pass through before they reach the final destination are known as hops. So for data to travel from point A to point B on the net, it has to go through a number of hops.

Linux & Unix being network operating systems have a number of powerful tools which aid the network administrator to find out a wealth of data about their network and the Internet. One such tool is the ubiquitous traceroute.

The tool traceroute is available in all Unix and Linux distributions and is used to find out the potential bottlenecks in between your computer and a remote computer across the net. The usage of this tool is quite simple and is as follows:
# traceroute <domain name or IP address>
Usually you have to be root to run this tool as it resides in the /usr/sbin directory. But if you use the full path, then you can run this tool as a normal user as follows:
$ /usr/sbin/traceroute <domain name or IP address>

For example, this is the output I received when I ran a trace on the www.yahoo.com domain from my machine.
$/usr/sbin/traceroute www.yahoo.com

traceroute to www.yahoo.com (69.147.114.210), 30 hops max, 40 byte packets
1 10.2.71.1 (10.2.71.1) 21.965 ms 22.035 ms 22.111 ms
2 (ISP) (ISP gateway) 22.510 ms 25.716 ms 26.073 ms
3 61.246.224.209 (61.246.224.209) 69.212 ms 59.778 ms 63.334 ms
4 59.145.6.1 (59.145.6.1) 65.632 ms 64.750 ms 64.868 ms
5 59.145.11.69 (59.145.11.69) 63.562 ms 64.219 ms 63.742 ms
6 203.208.143.241 (203.208.143.241) 318.632 ms 307.733 ms 316.650 ms
7 203.208.149.25 (203.208.149.25) 317.534 ms 308.116 ms 307.507 ms
8 203.208.186.10 (203.208.186.10) 245.835 ms 247.878 ms 248.862 ms
9 so-1-1-0.pat1.dce.yahoo.com (216.115.101.129) 286.774 ms 289.702 ms so-1-1-0.pat2.dce.yahoo.com (216.115.101.131) 326.470 ms
10 ge-2-1-0-p141.msr1.re1.yahoo.com (216.115.108.19) 324.044 ms 324.497 ms 326.011 ms
11 ge-1-32.bas-a1.re3.yahoo.com (66.196.112.35) 333.479 ms 333.019 ms ge-1-41.bas-a2.re3.yahoo.com (66.196.112.201) 292.967 ms
12 * * *
13 * * *
14 * * *
15 * * *
.
. //Truncated for brevity
.
29 * * *
30 * * *
As you can see from the output spewed by traceroute, it defaults to a maximum of 30 hops. The first line of the output gives the IP address of the yahoo.com domain which is 69.147.114.210, the maximum number of hops traceroute will keep track of the packets before it reaches the destination and the size of the packets which is 40 bytes.

The next 30 or so lines show the IP address or domain name of the gateway servers through which the packets pass, as well as the time in milliseconds of the ICMP TIME_EXCEEDED response from each gateway along the path to the host. The traceroute program utilizes the IP protocol's time to live (TTL) field. By default, it starts with a TTL value of 1, but this value can be changed with the -f option.

Now let's take a closer look at the output of traceroute to the yahoo.com domain as shown in the listing above. As you can see, the second hop is always to one's ISP's gateway, as shown by the address (I have removed the address of my ISP's gateway). On the same line, following the IP address, there are three time values in milliseconds. There are three values because traceroute by default sends 3 packets of 40 bytes each simultaneously. And the three time values are the times taken to send the packets and receive an ICMP TIME_EXCEEDED response from the gateway. Put another way, these three values are the round-trip times of the packets. So for the three packets to reach my ISP's gateway and get an echo back, it takes 22.510 milliseconds, 25.716 ms and 26.073 ms respectively, as displayed by the values of the 2nd hop.

Let's look at the 5th and 6th hops in the output above. If you compare the times, you will find a drastic increase. If it is 63.562 ms for the 5th hop, it is 318.632 ms for the 6th hop. This is because up till the fifth hop, the gateway servers were within the Indian subcontinent itself. Whereas the gateway of the 6th hop is in Singapore, so it takes that much more time to get a reply. Generally, smaller numbers mean better connections.

Check out the 11th hop. It shows two domains with one domain for the first two packets and a different domain for the third packet.

And from the 12th hop onwards I get a series of timeouts as shown by the asterisks. So my trace of the www.yahoo.com domain resulted in a series of timeouts and did not complete. The problem could be one of the following:
  • The network connection between the server on the 11th hop and that on 12th hop is broken.
  • The server on the 12th hop is down.
  • Or there is some problem with the way in which the server on the 12th hop has been setup.
To make sure, I did a ping of the www.yahoo.com domain and as expected, I received 100% packet loss as shown by the ping output below.
$ ping -c 2 www.yahoo.com
PING www.yahoo-ht2.akadns.net (69.147.114.210) 56(84) bytes of data.

--- www.yahoo-ht2.akadns.net ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1009ms
Usually this means I will not be able to access the concerned domain. But in yahoo.com's case, I was able to access the domain without any problem as in all probability, their website is mirrored across a number of servers spread across the world. So if one server is down, the query is re-routed to the next nearest server.

traceroute is a very useful tool to pin-point where an error occurs on the Internet. It can also be used to test the responsiveness of a domain or server. For example, if your route to a server is very long (takes over 25 hops), performance is going to suffer. A long route can be due to less-than-optimal configuration within some network along the way.

Similarly, if you see in a trace output, a large jump in latency (delay) from one hop to the next, that could indicate a problem. It could be a saturated (overused) network link; a slow network link; an overloaded router; or some other problem at that hop. It can also indicate a long hop, such as a cross-country link or one that crosses an ocean (compare the timing of the 5th and 6th hop in the yahoo.com trace output above).

Wednesday, January 10, 2007

VC80

I'm happy to share some thoughts. Please consider the following to be my personal point of view after only a few days of testing. Others' opinions might differ.

Pro V8:
* Compiler works much faster
* new Debugger features:
.. the "inline quickwatch" when pointing at a structure or object in the source
.. the ability to show the contents of std classes in a readable way.
* Intellisense improvements:
.. does not crash occasionally on large projects
.. does not bail out occasionally on large projects refusing to show anything until VC is restarted.
.. does include system headers like e.g. DirectX
.. does show global variables in namespaces.
.. seems to handle template classes way better
.. marks source that is not compiled due to preprocessor magic. Very helpful, especially when integrating third party source.

Contra V8:
* "Static" intellisense.

Seems like the precise and more complete info intellisense can offer now came with a price: most of the changes to a source or header aren't detected until the next successful build or a VC restart. I switched to a header, corrected a typo in a member var, switched back to the implementation, and it took about half an hour till intellisense recognised the new name. Adding a new function prototype to a header and then trying to implement it is killing intellisense: you're typing all the stuff completely on your own. After restarting VC and letting Intellisense update its database I tried to continue implementing that function, but it kept switching between the correct scope and "(Global scope)" in intervals of about 20 seconds, unable to show any member function or variable of that class. Intellisense is one of the most important features of VC in my opinion. And despite all the cool new features, the new intellisense is just too unreliable for my taste. Especially when you're trying to actually produce new code, not just edit existing code.

Note: I left the function syntactically correct. I'm already used to always add a closing bracket as soon as I type an opening one and then inserting the actual content in the middle. It helps Intellisense2003 to keep track. It doesn't make any difference for Intellisense2005.

* Executable speed.

I haven't done any benchmark or anything else the audience might call "reliable". I just converted the project to VC8 and recompiled it. My personal statistics: Debug 14 fps (vs 20 before), optimized Debug 50 fps (vs 90 in VC7), Release 90 fps (vs 115 before). As I said, the tests are not representative or valid for others, but they're important for me. At least the release configuration is; the rest is a nice-to-have.

I read the forums here and found some hints, adding defines to silence all the "deprecated" warnings I don't care for and trying to regain some of the speed. Yet to no avail. It might be a degraded compiler, though, much like the "student" version of VC7, where some optimisations were lacking. And I'm pretty sure with some time and a close look at all the new options there might be some further improvement in reach.

* Some features lacking

That's not a point exactly as I compare VC8 Express Edition against the full blown VS2003 Enterprise Edition my boss paid for me. I never used most of the stuff included there anyways. But there are some small details that indeed do hurt when they're missing: Macro support for example. I haven't found a detailed list what exactly the differences are between the express version and the larger ones so I can only guess what else I'll be missing then.

In the end we decided to stay at VC7. We know what twists are needed to work around some of intellisense's problems, others we got used to. We're currently evaluating Visual Assist and if it proves to be useful it is as much of an improvement as I have ever hoped VC8 would bring.

I hope you read this statement as a personal opinion. It's just a hint what I expected to find in the new version and it is in no way meant to be a general judgement. No offence intended.