Contents
Linux Concepts
GNU BinUtils
Libraries are of two types: static and dynamic. Dynamic libraries are linked at run time; if a required dynamic library is missing, the application will not run.
In cases where we want a program to be “independent” and not require any other libraries to be installed on the system before it can run, we need to resolve external functions and variables at compile time, and copy them into the program binary. This removes the runtime dependency on the library. For this purpose, static libraries are created — archive files of one or more object files.
Utilities:
ar: used to build static libraries. It archives binary (object) files, and can create, modify and extract archives.
How is ar different from tar? ar creates a symbol table inside the output file, whereas tar does not: ar yields a collection of symbols, tar a collection of files.
$ gcc -c foo.c && gcc -c bar.c
$ ar -cvq libfoobar.a foo.o bar.o
A minimal test program that includes foobar.h:

#include <stdio.h>
#include "foobar.h"

int main(int argc, char **argv) {
    return 0;
}
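Assuming the program above is saved as main.c (a file name not given in the article), it can be compiled and linked against the static library like this:

$ gcc main.c -L. -lfoobar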
The -L option gives the library search path, and -l the library name; the linker adds the lib prefix and the .a extension (or .so in the case of a shared library). The output file is a.out, which has the contents of the static library compiled into the binary. libfoobar.a can now even be deleted without affecting the program's output, since the library is statically compiled into it.
objdump: the most important binary tool; it can be used to display all the information in an object (binary) file. This tool generates the assembly code:
$ objdump -S ./test > asm
From this output you can easily read the instructions that were generated for the program.
strings: Lists all the printable strings in a binary file.
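For example, to dump the printable strings of the test binary used above:

$ strings ./test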
nm: lists the symbols defined in the symbol table of an object file.
Sample output of the nm command:
$ nm ./test
08049764 A __bss_start
0804975c D __data_start
08048564 R const_int
08049778 B global_int1
08049760 D global_int2
0804976c B global_string
08048476 T main
         U printf@@GLIBC_2.0
080483f4 T test1
0804841d T test2
08048459 T test3
The functions main, test1, test2 and test3 appear with a T preceding them; T stands for the text section, in which all these functions reside. Next come the global and static variables: global_string and global_int1 are preceded with B, while we have R const_int and D global_int2. This is because the data section is divided into two further sections: Uninitialised Data or BSS (Block Started by Symbol), and Initialised Data. Both global_string and global_int1 are declared but not initialised, so they are in BSS (B); global_int2 has been initialised, and is in D, the Initialised Data section.
The storage of const_int is interesting. We declared it as a const variable, whose value won't change throughout the program, i.e. a read-only variable. Thus, it is stored in R, the read-only data section.
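The test source itself is not shown in the article, but declarations along these lines (names, types and values assumed from the symbols above) would produce such a symbol table:

/* hypothetical test.c, reconstructed from the nm output above */
#include <stdio.h>

const int const_int = 5;        /* R: read-only data section */
int global_int1;                /* B: declared but not initialised (BSS) */
int global_int2 = 10;           /* D: initialised data section */
char global_string[32];         /* B: uninitialised array, also in BSS */

void test1(void) { }            /* T: text (code) section */
void test2(void) { }
void test3(void) { }

int main(void) {
    printf("global_int2 = %d\n", global_int2);  /* printf appears as U: undefined, resolved from glibc */
    return 0;
}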
ldd: lists the shared libraries on which the object binary is dependent.
strip: deletes the symbol table information. When you develop code you need the symbols for debugging, but when you deploy it there is no sense leaving the symbols in the executable file and wasting precious kilobytes, so strip them before deploying.
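For example (using the a.out binary from earlier; the exact output will vary):

$ nm a.out | wc -l      # count the symbols before stripping
$ strip a.out
$ nm a.out              # nm now reports that there are no symbols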
size: this command reports the sizes of the different memory sections of an ELF executable:
$ size a.out
text data bss dec hex filename
1192 272 24 1488 5d0 a.out
Here, dec is the total size of all sections (text + data + bss), in decimal.
Linux provides the functions dlopen, dlsym and dlclose (declared in <dlfcn.h>), which can be used to load a shared object, look up a symbol in that shared object, and close the shared object, respectively. On Windows, the LoadLibrary and GetProcAddress functions replace dlopen and dlsym, respectively.
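As a short sketch (not from the article), the following loads the math library at run time and looks up cos; the source file name dltest.c is assumed, and it would be built with gcc dltest.c -ldl:

#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);      /* load the shared object */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* look up the "cos" symbol and cast it to the matching function type */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);                                     /* close the shared object */
    return 0;
}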
Some advantages and disadvantages of shared libraries:
1. It saves space on the system where the program is installed. If you are installing 10 programs and they all make use of the same shared library, you save a lot of space. If you used a static archive instead, the archive would be included in all 10 programs and thus copied 10 times.
2. Users can upgrade the libraries without upgrading all the programs that depend on them; you don't have to re-link all the programs the way you do with a static archive.
3. The fact that an upgrade to a shared library affects all programs that depend on it can also be a disadvantage.
4. Shared libraries won't work if you don't know where they are going to end up on the target system, and asking your users to set LD_LIBRARY_PATH means an extra step for them. Because each user has to do this individually, it is a substantial additional burden.
Creation of core dump files
Below are the steps to manually create a core file from a running process.
Before creating a core file you should check your user limits settings to ensure that core files can be created.
[tarry@tarry ~]$ ulimit -c
0
The above setting disables the creation of core files: the value is a size limit on the core file, and if it is 0 no core file can be created. You can change this setting by running the following.
[tarry@tarry ~]$ ulimit -c unlimited
[tarry@tarry ~]$ ulimit -c
unlimited
It is important that you do this as the user the application runs as, and before you start the application in the same session. The setting is inherited by the application, so whatever ulimit is set to before starting the application is what the application will use (unless a start script changes it).
After setting ulimit, you can create a core file by sending the application a SIGQUIT signal with kill -3.
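For example, if the application's PID were 12345 (a hypothetical value):

$ kill -3 12345

The core file is written according to the kernel's core_pattern setting, by default into the process's current working directory.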
One thing to watch out for is that the core file does not fill up the filesystem.
Controlling stack allocation
When you declare variables in programs, the kernel allocates space for their data on the stack. You can tell the kernel to limit how much space on the stack (or the heap, for that matter) any given program can use, so that one program can't just take up the whole stack region. If there were no limit on how much stack a program could use, bugs that would normally cause a single program to crash could instead destabilise the entire system. Exceeding the allocated stack space, at which point the kernel kills the program, is called a "stack overflow".
One of the most common stack-related bugs is excessive or infinite recursion. Since each new call to a function places all of its local variables on the stack, recursive programs that are not tail-call optimised can quickly deplete the stack space the kernel has allocated to the process.
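A minimal sketch (not from the article) of such runaway recursion; compiled without optimisation (e.g. gcc -O0), it ends with a segmentation fault once the stack limit is hit:

#include <stdio.h>

/* Runaway recursion: every call keeps a 1 KB frame on the stack and there is
   no base case, so the process eventually exceeds its stack limit. */
static unsigned long recurse(unsigned long depth) {
    char buffer[1024];
    buffer[0] = (char) depth;                  /* use the buffer so it stays in the frame */
    if (depth % 10000 == 0)
        fprintf(stderr, "depth = %lu\n", depth);
    return recurse(depth + 1) + buffer[0];     /* non-tail call: each frame stays live */
}

int main(void) {
    return (int) recurse(0);
}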
Running out of stack space is traditionally a very scary thing, as it is related to "stack smashing", or stack buffer overflow. This occurs when a malicious user intentionally overruns a buffer on the stack to overwrite control data such as the return address, so that the processor executes arbitrary instructions of their choosing instead of the instructions in your own code.
As far as performance is concerned, there should be no impact whatsoever. If you are hitting your stack limit via recursion, raising the stack size is probably not the best solution, but otherwise it isn't something you should have to worry about. If a program absolutely must store massive amounts of data, it can use the heap instead.
The stack size limit can also be controlled with the -s option of the ulimit command.
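For example (the 8192 KB figure is a typical default, not taken from the article):

$ ulimit -s            # show the current stack size limit, in kilobytes
8192
$ ulimit -s 16384      # raise it for the current shell session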
Link explaining other ulimit arguments
nohup command
nohup is a Unix command used to start another program in such a way that it does not terminate when the parent process is terminated. This is accomplished by ignoring the SIGHUP signal, and it is what distinguishes nohup from simply running the command in the background with '&'.
The commands below start the program myprogram in the background in such a way that the subsequent logout does not stop it:
$ nohup myprogram &
$ exit
Note that these methods prevent the process from being sent a hangup signal on logout, but if input/output is still being performed on the standard I/O streams (stdin, stdout or stderr), the process can still hang the terminal.
nohup is often used in combination with the nice command to run processes at a lower priority.
$ nohup nice abcd &
Getting the last 5 minutes of logs from a large log file:

#!/bin/bash
# Get the previous 5 minutes of logs from now (IST)
d1=$(date --date="-5 min" +'%Y-%m-%d %H:%M:%S.%3N')
d2=$(date +'%Y-%m-%d %H:%M:%S.%3N')
awk -v d1="$d1" -v d2="$d2" '$0 > d1 && $0 < d2 || $0 ~ d2' /var/log/<file_name> > /var/log/<new_file_name>