Wednesday, December 26, 2012

Export the proxy settings in the shell environment:

$ export HTTP_PROXY=http://<proxy_user_id>:<proxy_password>@<proxy_server>:<proxy_port>
$ export HTTPS_PROXY=http://<proxy_user_id>:<proxy_password>@<proxy_server>:<proxy_port>

Wednesday, December 5, 2012

Printing using CUPS

CUPS (Common UNIX Printing System)
To set up a printer from the command line:

1. Copy the PPD file of the printer driver to the target (e.g. Hewlett-Packard-hp-LaserJet-4250.ppd).
2. Run (lpadmin -p laserjet -v socket://ip_address_of_printer -P printer_ppd_filename_with_path)
         (example : lpadmin -p laserjet -v socket://10.1.8.20 -P Hewlett-Packard-hp-LaserJet-4250.ppd)
3. Run (cupsaccept laserjet)
4. Run (cupsenable laserjet)
5. Run (lpoptions -d laserjet) to make it the default printer
6. Run lpq (to check the status of the printer)
7. Run lpstat -d (to check the default printer)




[SOMETHING]-FILEFORMAT
        |
        |   [something]tops
        V
   APPLICATION/POSTSCRIPT
        |
        |   pstops
        V
   APPLICATION/VND.CUPS-POSTSCRIPT-----------+
                                             |
                    +------------------------v------------------------------+
                    |                                                       |
                    |       Ghostscript at work....                         |
                    |          (= a "postscript interpreter")               |
                    |                                                       |
                    | * * * * * * * * * * * * * * * * * * * * * * * * * * * |
                    |                        *                              |
                    |  run with commandline  *                              |
                    |  parameter             *                              |
                    |       "-sDEVICE=cups"  *                              |
                    |                        *                              |
                    |  called by pstoraster  *   run with commandline       |
                    | (CUPS standard filter) *   parameter                  |
                    |                        *        "-sDEVICE=[s.th.]"    |
                    +------------------v-----+                              |
                                       |     |                              |
   APPLICATION/VND.CUPS-RASTER <-------+     |                              |
        |                                    |   called by foomatic-rip     |
        |  rasterto[something]               | (openprinting.org extension)|
        |   (= "raster driver")              |                              |
        |                                    +----------------v-------------+
        |                                                     |
   SOMETHING-DEVICE-SPECIFIC <--------------------------------+
        |
        V
       backend

refer:
http://www.linuxfoundation.org/collaborate/workgroups/openprinting/database/cupsdocumentation

Tuesday, December 4, 2012

How to set up Buildroot for building modules for the target


Follow these steps:

A: Run make menuconfig and set the following options:

1. Target Architecture --> ARM (little endian)
2. Target Architecture Variant --> cortex-A9
3. Target ABI --> EABI

4. TOOLCHAIN:
--------------------------
4.1: Toolchain type: External toolchain
4.2: Toolchain --> Sourcery CodeBench ARM2010q1
4.3: Toolchain origin --> Pre-installed toolchain
4.4: Toolchain path --> /opt/arm2010q1

------------------------------------------
Get the rootfs from the path (buildroot/output/images --> rootfs.tar).

If we want to see the libraries and other staged files, we can look in buildroot/output/target/usr/lib.


commands:

1. make
2. make clean

Monday, September 24, 2012

Some important commands

find . -name CVS -print | xargs rm -rf

It deletes every CVS directory under the current directory.

Wednesday, August 1, 2012

ar and tar command in unix

Working with the ar command: used to create, modify, and extract from archives.

It is used to combine object files into an archive with a .a extension, also known as a static library.

Example

Suppose you have 3 object files (hexa, limit, struct)

ar -cr lib.a hexa limit struct (creates the lib.a archive)

ar -t lib.a (displays the list of files in the lib.a archive)

ar -d lib.a hexa (deletes the hexa object file from the lib.a archive)


ar -x lib.a hexa (extracts a particular file from the archive)

ar -r lib.a hexa (reinserts the file into the archive)



-------------------------------------------

The tar command is used to store and extract files from a tape or disk archive.

Example (suppose we have the files hexa.c, limit.c, struct.c, header.c)

tar -cvf demo.tar hexa.c limit.c struct.c header.c (creates a tarball containing the above files)

tar -xvf demo.tar (extracts the files from demo.tar into the current directory)

tar -tvf demo.tar (lists the files present in demo.tar)

tar --delete -f demo.tar header.c (deletes the header.c file from demo.tar)


tar -rvf demo.tar header.c (appends the file to demo.tar if it is not already present)

Thursday, July 19, 2012

How to take backup and setup the CVS on Linux Machine

For backup there are two ways:

1) Make a tar file, or
2) use the rsync command:
   rsync -rv user01@server01.comentum.com:Source path/ /Destination path/


 To set up the CVS server:




 1) Install the CVS server (apt-get install cvs)
 2) Install xinetd (apt-get install xinetd)

 To set up xinetd, create a file called "cvs" in /etc/xinetd.d/ with the following content.

 service cvspserver
{
     port = 2401
     socket_type = stream
     protocol = tcp
     user = root
     wait = no
     type = UNLISTED
     server = /usr/bin/cvs
     server_args = -f --allow-root /usr/local/cvsroot pserver   // (If you want to use a different path, replace /usr/local/cvsroot)
     disable = no
}

3)  Start the CVS server:

sudo /etc/init.d/cvsd start

 4) Check whether the CVS server is running or not (ps -ef | grep cvsd).

 ** Paste the backed-up files (if you took a backup) and update the path;
 otherwise you have to create a project directory and paste the files there.

 5) Now you have to add the users and the group for the repository:

 Go to System->Administration->Users and Groups. Click the Unlock button and enter the password, then click the Manage Groups button. Highlight cvsd, then click Properties and select the username from the Group Members list.
Create the user password for CVS login as shown below:
cvsd-passwd /var/lib/cvsd/myrepos username

6)  The syntax to login to the local CVS server is shown below:
cvs -d :pserver:username@localhost:/myrepos login
However, if $CVSROOT is defined you can simply enter "cvs login" instead. Adding the following line to ~/.bashrc is the easiest way to set it in the environment.

export CVSROOT=:pserver:username@localhost:/myrepos

7) cvs login
8) Enter the password



Now you can check out and check in:

9) To check out, the command is (cvs co <directory name>)
10) To check in, the command is (cvs ci -m "new version" filename)
or simply: cvs commit
11) To add a directory to CVS (cvs -R add "Test file" *)





Finally, if you are facing problems, fix the permissions on the cvs directory:
chmod 775 -R cvs
chgrp src cvs
chmod g+s cvs


-------------------------------------------------------------------

Add a repository to CVS with a subfolder

cvs import releases Sirius releases

releases --- module (main folder) name
Sirius --- vendor tag (can be any name; the last argument is the release tag)




refer : https://help.ubuntu.com/10.04/serverguide/cvs-server.html

Wednesday, July 11, 2012

Important UNIX keywords (alarm, strtol)

The alarm API function can be very useful to time out other functions. The alarm function works by raising a SIGALRM signal once the number of seconds passed to alarm has expired. The function prototype for alarm is:

unsigned int alarm(unsigned int secs);
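
A minimal sketch (not from the original post) showing alarm with a SIGALRM handler installed first; without a handler, the default action of SIGALRM terminates the process:

#include <signal.h>
#include <unistd.h>

void catch_alarm(int sig_num)
{
    write(1, "alarm fired\n", 12);  // write() is async-signal-safe; printf is not
}

int main()
{
    signal(SIGALRM, catch_alarm);   // install the handler so SIGALRM does not terminate us
    alarm(3);                       // ask the kernel to raise SIGALRM after 3 seconds
    pause();                        // suspend until a signal is received
    return 0;
}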

------------------------------------------------------------------------------------------------------------------------------------
strtol: convert a string to a long integer.

This function first discards as many whitespace characters as necessary until the first non-whitespace character is found.
Finally, a pointer to the first character following the integer representation in str is stored in the object pointed to by endptr.

long int strtol(const char *str, char **endptr, int base);

Parameters:

str --- string containing the value (decimal, hexadecimal, octal, etc.)

endptr --- reference to an object of type char*, whose value is set by the function to the next character in str after the numerical value.
This parameter can also be a null pointer, in which case it is not used.

base --- the numeric base (10, 16, 8, 2, etc.), or 0 to auto-detect the base from the prefix.
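
A small usage sketch (the input strings are hypothetical examples):

#include <stdio.h>
#include <stdlib.h>

int main()
{
    const char *str = "  0x1A rest";       // hypothetical input string
    char *endptr;

    long val = strtol(str, &endptr, 16);   // parse as hexadecimal
    printf("value = %ld\n", val);          // prints 26
    printf("rest  = \"%s\"\n", endptr);    // prints " rest"

    // base 0 auto-detects the base from the prefix (0x -> hex, leading 0 -> octal)
    long oct = strtol("0755", NULL, 0);
    printf("octal = %ld\n", oct);          // prints 493
    return 0;
}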

Monday, July 2, 2012

Making the SPEC file in the linux

# This is a spec file for starlight audio module

%define name      rpmdemo
%define buildroot %{_topdir}/%{name}-0.1
%define buildarch noarch
%define prefix    /opt/starlight
%define _binaries_in_noarch_packages_terminate_build 0
%define _source_filedigest_algorithm 8
%define _binary_filedigest_algorithm 8

BuildRoot:      %{buildroot}
Summary:        Starlight audio module
License:        GPL
Name:           %{name}
Version:        0.1
Release: 0.1
Source:         %{name}-0.1.tar.gz
Prefix:         %{prefix}
Group:          Development/Tools
BuildArch:  %{buildarch}

%description
Starlight audio module

%prep
%setup -q

%install
mkdir -p %{buildroot}%{prefix}/bin/
cp -r %{_topdir}/BUILD/%{name}-0.1/bin/hello %{buildroot}%{prefix}/bin/

%files
%defattr(-,root,root)
%{prefix}/bin/hello



 *****************************************************************
2nd spec file for packaging archive content (the .tar.gz itself)


# This is a spec file for starlight audio module

%define name     lcd
%define buildroot %{_topdir}/%{name}-0.1
%define buildarch noarch
%define prefix    /opt/starlight
%define _binaries_in_noarch_packages_terminate_build 0
%define _source_filedigest_algorithm 8
%define _binary_filedigest_algorithm 8

BuildRoot:      %{buildroot}
Summary:        Starlight audio module
License:        GPL
Name:           %{name}
Version:        0.1
Release: 0.1
Source:         %{name}-0.1.tar.gz
Prefix:         %{prefix}
Group:          Development/Tools
BuildArch:  %{buildarch}

%description
Starlight audio module

%prep
%setup -q

%install
mkdir -p %{buildroot}%{prefix}/
cp -r %{_topdir}/BUILD/%{name}-0.1.tar.gz   %{buildroot}%{prefix}/

%files
%defattr(-,root,root)
%{prefix}/lcd-0.1.tar.gz

**********************************************************************

An .rpm file is used for installing packages on RPM-based distributions (Red Hat, Fedora, SUSE, etc.); Ubuntu uses .deb packages natively, so an RPM has to be converted (for example with alien) before it can be installed there.

http://rpm5.org/docs/rpm-guide.html#ch-creating-rpms
http://www.rpm.org/max-rpm/ch-rpm-basics.html

HOW TO BRING UP THE WLAN (WIRELESS LAN) IF IT IS NOT UP IN LINUX

Check with the command: ifconfig -a
a) If the wlan interface is showing, you are done; otherwise follow these steps:

i) cmd :: depmod
ii) cmd :: modprobe wl12xx_sdio
iii) cmd :: modprobe wl12xx
Now check ifconfig -a again; if wlan0 is showing, you are done, otherwise go to the next step:
Go to /lib/modules/3.03.3/kernel/drivers/net/wireless/wl12xx
and then repeat steps (i) and (ii) above.

If an IP address is assigned automatically, you are done; otherwise assign it manually by following these steps:

I) ifconfig wlan0 19.68.5.0 netmask 255.255.255.0
II) We also have to set the ESSID first, with the following command:

iwconfig wlan0 essid "st"


If the Wi-Fi interface is up, you can scan using the following commands:

cmd :: iwlist wlan0 scan

cmd :: iwconfig

**** Copy the wl12xx modules to the rootfs if they are not present.
Copy from the 3.03.3 lib/modules directory to ---> Rootfs

Thursday, June 28, 2012

RunLevel in Linux

There are differences in runlevels according to the operating system; seven runlevels are supported in the standard Linux kernel.
The runlevels describe certain system states. For example:

a) Runlevel 0 : Halt (no activity; the system can be safely powered down)
b) Runlevel 1 : Single-user mode (rarely used)
c) Runlevel 2 : Multiple users, no NFS (network filesystem); also used rarely
d) Runlevel 3 : Multiple users, command-line (i.e. all-text mode) interface; the standard runlevel for most Linux-based server hardware
e) Runlevel 4 : User-defined
f) Runlevel 5 : Multiple users, GUI; the standard runlevel for most Linux-based desktop systems
g) Runlevel 6 : Reboot


Linux runlevel commands:

who -r  // gives the current runlevel

init 1  // used to change the runlevel (here, to runlevel 1)


Monday, June 25, 2012

Create RPM Files

 STEPS FOR CREATING THE RPM FILES:

A) First we have to create the five standard directories:
mkdir -p BUILD  RPMS  SOURCES  SPECS  SRPMS

B) Build the binary of any program (for example, a hello world program compiled to a binary) and
copy that binary into the path SOURCES/rpmdemo-0.1/bin.

Then make the tarball rpmdemo-0.1.tar.gz from that directory and place it in SOURCES.

Then make the spec file and place it into the SPECS folder.

One change needed in the spec file: set the package name to match your output (the hello world binary).

Then build it with this command:

rpmbuild -v -bb --clean SPECS/rpmdemo.spec --define "_topdir ${PWD}"


----------------------------------------------------------------------------------------------------------
Only .tar.gz, .tar, .tgz and .tar.Z can be used with RPM.


************************************************************************
 Some useful links::
http://home.gna.org/subtitleeditor/

Friday, June 15, 2012

FREAD AND FWRITE functions

They are mainly used for reading and writing binary data structures:


FREAD and FWRITE:

#include <stdio.h>
size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);

The fread() function reads nmemb elements of data, each size bytes long, from the stream pointed to by stream, storing them at the location given by ptr.


The fwrite() function writes nmemb elements of data, each size bytes long, to the stream pointed to by stream, obtaining them from the location given by ptr.
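
A short sketch (hypothetical struct and file name, not from the original post) that writes one structure to a binary file and reads it back:

#include <stdio.h>

struct record {
    int  id;
    char name[32];
};

int main()
{
    struct record out = { 1, "hexa" };   // hypothetical data
    struct record in;

    FILE *fp = fopen("record.bin", "wb");          // hypothetical file name
    if (fp == NULL)
        return 1;
    fwrite(&out, sizeof(struct record), 1, fp);    // write 1 element of sizeof(struct record) bytes
    fclose(fp);

    fp = fopen("record.bin", "rb");
    if (fp == NULL)
        return 1;
    fread(&in, sizeof(struct record), 1, fp);      // read it back into 'in'
    fclose(fp);

    printf("id=%d name=%s\n", in.id, in.name);
    return 0;
}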

Code for creating the cscope.out file

The cscope package is used for searching through a whole project.

To build the database:


    find . -name '*.[ch]' > cscope.files   // includes both the .c and .h files
 
    cscope -b
    CSCOPE_DB=cscope.out; export CSCOPE_DB

That's it; the cscope package must already be installed on the system.
 
commands ::

** uname - used to get information about the system, like the kernel version, processor, release, OS, node name, etc.

ex: uname -a : gives all the information about the system.
    uname -v, -r, -n, -s, -o, etc.
  
 
 
-----------------------------------------------------------------
                      links Imp
1. http://principialabs.com/beginning-ssh-on-ubuntu/ (for logging in remotely over SSH)

MEMSET, MEMCPY and FLUSH functions

MEMSET - fills memory with a constant byte
#include <string.h>
void *memset(void *s, int c, size_t n);

The memset() function fills the first n bytes of the memory area pointed to by s with the constant byte c.

-- It returns a pointer to the memory area s.
------------------------------------------------------------------------------------------------------------
MEMCPY -- copy a memory area

#include <string.h>
void *memcpy(void *dest, const void *src, size_t n);

The memcpy() function copies n bytes from memory area src to memory area dest. The memory areas should not overlap; use memmove if they do overlap.
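
A tiny sketch (not from the original post) combining both calls on illustrative buffers:

#include <stdio.h>
#include <string.h>

int main()
{
    char src[16] = "hello";
    char dest[16];

    memset(dest, 0, sizeof(dest));        // fill the whole destination buffer with zero bytes
    memcpy(dest, src, strlen(src) + 1);   // copy the string including its terminating '\0'

    printf("%s\n", dest);                 // prints "hello"
    return 0;
}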
***************************************************************************
SYNC-- flush file system buffers
sync();

Force changed blocks to disk, update the super block.

******************************************************************************
FFLUSH-- flush a stream

#include <stdio.h>
int fflush (FILE *stream );

Description: It forces a write of all user-space buffered data for the given output or update stream via the stream's underlying write function.
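
A tiny sketch (not from the original post) of a common use: forcing a partial line of output to appear before a long-running operation:

#include <stdio.h>
#include <unistd.h>

int main()
{
    printf("working...");   // no newline, so the text may sit in the stdio buffer
    fflush(stdout);         // force the buffered output to appear now
    sleep(2);               // without the flush, nothing might be shown until the program ends
    printf(" done\n");
    return 0;
}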
 

Tuesday, June 5, 2012

Signal in Unix

The signal API function allows us to install a signal handler for a process.
The prototype of the signal API function is defined as:

sighandler_t signal(int signum, sighandler_t handler);
where the sighandler_t typedef is:
typedef void (*sighandler_t)(int);

Signal handlers for a process can be of three different types: the signal can be ignored (via SIG_IGN), handled by the default handler for the particular signal type (SIG_DFL), or handled by a user-defined handler (installed via signal).

* The first group (terminate, e.g. SIGHUP, SIGINT) lists the signals whose default action is to terminate the process. The second group (ignore, e.g. SIGCHLD, SIGCLD) lists the signals for which the default action is to ignore the signal. The third group (core, e.g. SIGQUIT, SIGILL) lists those signals whose default action is to both terminate the process and perform a core dump (generate a core dump file). And finally the fourth group (stop, e.g. SIGSTOP, SIGTSTP) stops the process (suspends it, rather than terminating).
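
A minimal sketch (not from the original post) of the first two handler types: ignoring SIGINT with SIG_IGN and then restoring the default action with SIG_DFL:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

int main()
{
    signal(SIGINT, SIG_IGN);   // ignore Ctrl-C
    printf("Ctrl-C is ignored for 5 seconds...\n");
    sleep(5);

    signal(SIGINT, SIG_DFL);   // restore the default action (terminate)
    printf("Ctrl-C terminates the process again.\n");
    sleep(5);
    return 0;
}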

Wait in Unix

The purpose of the wait API function is to suspend the calling process until the child process exits or until a signal is delivered.
If the parent isn't currently waiting when the child exits, the child process becomes a zombie process.

The prototype for the wait function is defined as:

pid_t wait(int *status);

The status variable returns status information about the child's exit. This variable can be evaluated using a number of macros, such as:

1. WIFEXITED: non-zero if the child exited normally.
2. WIFSIGNALED: returns true if the child exited due to a signal that wasn't caught by the child.
3. WTERMSIG: returns the signal number that caused the child to exit.
4. WEXITSTATUS: returns the exit status of the child.
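
A short sketch (not from the original post; the exit status 7 is arbitrary) combining fork, wait, and the status macros:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();

    if (pid == 0) {            // child context
        exit(7);               // arbitrary exit status for the demonstration
    } else if (pid > 0) {      // parent context
        int status;
        wait(&status);         // suspend until the child exits
        if (WIFEXITED(status))
            printf("child exited normally, status %d\n", WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("child killed by signal %d\n", WTERMSIG(status));
    }
    return 0;
}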

Monday, June 4, 2012

Fork In Unix

The fork API function provides the means to create a new child process from the existing parent process. One difference is that the process IDs of the parent and the child are different.

** File locks and signals that are pending to the parent are not inherited by the child process.

The prototype for the fork() function is :

pid_t fork(void);

** The fork call has a very unique structure in that the return value identifies the context in which the process is running. If the return value is zero, then the current process is the newly created child process. If the return value is greater than zero, then the current process is the parent.
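
A minimal sketch (not from the original post) of the three return-value cases:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t ret = fork();

    if (ret == 0)
        printf("child : pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
    else if (ret > 0)
        printf("parent: pid=%d, child=%d\n", (int)getpid(), (int)ret);
    else
        perror("fork");        // e.g. EAGAIN or ENOMEM
    return 0;
}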


** EAGAIN and ENOMEM: both of these errors arise from a lack of available memory or process resources.

** Rather than copying the memory when the fork takes place, the parent and child share the same memory pages, but neither is permitted to write to them. When a write takes place to one of the shared pages, that page is copied for the writing process so that it has its own copy. This is called "copy-on-write".
Only as writes occur to the shared data memory does the segregation of the pages take place.

Tuesday, May 29, 2012

Signal Handling in Unix

A signal is fundamentally an asynchronous callback for processes in Linux. We can register to receive a signal when an event occurs for a process, or register to ignore signals.

Signals are an important topic in process management because they allow processes to communicate with one another. We can use the signal API function to register our handler.

Example:
#include <stdio.h>
#include <sys/types.h>
#include <signal.h>
#include <unistd.h>

void catch_ctlc(int sig_num)
{
    printf("Caught control-c.\n");
    fflush(stdout);
    return;
}

int main()
{
    signal(SIGINT, catch_ctlc);   // Registering our callback function.
    printf("\nGo ahead, make my day.\n");
    pause();                      // Suspends the process until a signal is received.

    return 0;
}

Thursday, May 24, 2012

UNIX: Process

Process: A process is a program in execution.
A process has 5 states:
1. New: The process is being created.
2. Ready: The process has all its resources and is ready to be allocated the processor.
3. Running: The process is executing instructions.
4. Waiting: The process is waiting for I/O or other events.
5. Stopped/Terminated: The process has finished execution.

The process information is stored by the operating system in the PCB (Process Control Block), which contains the following fields:


1. Process status
2. Process number
3. Program counter
4. Registers
5. Memory limits
6. List of open files

****************************************************************************
CONTEXT SWITCHING: Context switching is the process of storing and restoring the state of the CPU. It is an essential feature of a multitasking operating system.

CPU Scheduler: The scheduler selects a process from the ready processes in memory and allocates the CPU to it.
Types of schedulers:
1. Short-term scheduler
2. Long-term scheduler
3. Medium-term scheduler

************************************************************************************
PROCESS: When a subprocess is created (via fork), a new child task is created with a copy of the memory used by the original parent task.
When a new task is created, the memory space used by the parent isn't actually copied to the child; instead, both the parent and child reference the same memory space, with the memory pages marked as copy-on-write. When either process attempts to write to the memory, a new set of memory pages is created for that process, private to it alone.

When the fork API function returns, the split occurs, and the return value from fork identifies in which context the process is running.
There are three possibilities for the return value of the fork call. When the return value is greater than zero, we're in the parent context and the value represents the PID of the child. When the return value is zero, we're in the child context. Finally, a value less than zero (negative) represents an error.
************************************************************************************

 How to give the output binary a particular name rather than a.out

cc -o objectname filename.c


***************************************************************************



 
----- kill -9 processid (to kill a particular process)
----- kill -STOP processid (to stop the process)
----- kill -CONT processid (to continue the process from where it was stopped)


AGING: Starvation can be compensated for if priorities are computed internally. Suppose one parameter in the priority-assignment function is the amount of time the process has been waiting.

THE LONGER A PROCESS WAITS, THE HIGHER ITS PRIORITY BECOMES .

Wednesday, April 11, 2012

Script Task in SSIS

Abstract
This article demonstrates creating a SQL Server Integration Services package that imports a text file into a SQL Server database table using a Script Task component.
Requirements
Article
We begin by launching Microsoft Visual Studio and creating a new Integration Services Project, which is located under the Business Intelligence Projects category. After you have assigned a project name, click and drag the Script Task into the Control Flow pane from the package's toolbox. Right-click the script task and click "Edit". Under the Script Task Editor, change the "ScriptLanguage" to "Microsoft Visual C# 2008".
In Project Explorer, ensure the following references are added:
Back to the code window, ensure that the following namespaces are declared:
After the above declarations, proceed to creating a new application instance:
Application selectSIFISO_app = new Application();
Create package:
Package sS_pkg = new Package();
Assign relevant package name and description:
sS_pkg.Name = "Load Flat File Source into OLE DB Destination Using C#";
sS_pkg.Description = "Programmatically create an SSIS 2008 package that loads a Flat File Source into OLE DB Destination Using Script Task's C# language";
Insert the Data Flow Task with appropriate name and some buffer space for processing of file (the last part is optional - you can also use default buffer allocation):
sS_pkg.Executables.Add("STOCK:PipelineTask");
TaskHost taskHost = sS_pkg.Executables[0] as TaskHost;
MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;
taskHost.Name = "Dynamic Data Flow Task";
taskHost.Properties["DefaultBufferMaxRows"].SetValue(taskHost, "1000000");
Insert the Flat File connection:
ConnectionManager connectionManagerFlatFile = sS_pkg.Connections.Add("FLATFILE");
You can change this path depending on where you have stored the flat file (ensure you download the attached file, see "Requirements" section above):
connectionManagerFlatFile.ConnectionString = @"C:\Temp\flat_src.txt";
Assign name to the flat file connection:
connectionManagerFlatFile.Name = "TXT_FlatFile";
Indicate that the flat file is delimited:
connectionManagerFlatFile.Properties["Format"].SetValue(connectionManagerFlatFile, "Delimited");
Indicate whether the source file has column headings or not - in this case, our sample data has column headings, hence - true:
connectionManagerFlatFile.Properties["ColumnNamesInFirstDataRow"].SetValue(connectionManagerFlatFile, Convert.ToBoolean(true));
Get native Flat File connection:
RuntimeWrapper.IDTSConnectionManagerFlatFile100 connectionFlatFile = connectionManagerFlatFile.InnerObject as RuntimeWrapper.IDTSConnectionManagerFlatFile100;
Declare local string variable that will be used as part of reading the text file:
string line;
Determine the number of columns by reading the sample Flat File - line by line:

using (StreamReader file = new StreamReader(@"C:\Temp\flat_src.txt"))
{
 try
    {
    while ((line = file.ReadLine()) != null)
   {
     char[] delimiters = new char[] { '|' };
     string[] parts = line.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
     
        for (int i = 0; i < parts.Length; i++)
 {
    RuntimeWrapper.IDTSConnectionManagerFlatFileColumn100 flatFileCol 
    = connectionFlatFile.Columns.Add() as RuntimeWrapper.IDTSConnectionManagerFlatFileColumn100;
    sS_AssignColumnProperties(flatFileCol, parts[i], "|");
 }
     //Exit file after reading the first line
     break;
   }                
  }
 catch (Exception ex)
    {
     throw ex;
    }
 finally
    {
     file.Close();
    }
}
Edit the last Flat File column delimiter into NewLine instead of a Comma:
connectionFlatFile.Columns[connectionFlatFile.Columns.Count - 1].ColumnDelimiter = Environment.NewLine;
Insert Flat File source component:
IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
componentSource.Name = "FlatFileSource";
componentSource.ComponentClassID = "DTSAdapter.FlatFileSource";
Insert source design-time instance and initialise component:
CManagedComponentWrapper instanceSource = componentSource.Instantiate();
instanceSource.ProvideComponentProperties();
Set source connection:
componentSource.RuntimeConnectionCollection[0].ConnectionManagerID = connectionManagerFlatFile.ID;
componentSource.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.ToConnectionManager90(connectionManagerFlatFile);
Reinitialize Flat File source metadata:
instanceSource.AcquireConnections(null);
instanceSource.ReinitializeMetaData();
instanceSource.ReleaseConnections();
Insert the SQL Server 2008 OLE-DB connection:
ConnectionManager connectionManagerOleDb = sS_pkg.Connections.Add("OLEDB");
connectionManagerOleDb.ConnectionString = string.Format("Provider=SQLOLEDB.1;Data Source={0};Initial Catalog={1};Integrated Security=SSPI;", "localhost", "AdventureWorks");
connectionManagerOleDb.Name = "OLEDB";
connectionManagerOleDb.Description = "OLEDB Connection";
Insert OLE-DB destination:
IDTSComponentMetaData100 componentDestination = dataFlowTask.ComponentMetaDataCollection.New();
componentDestination.Name = "OLEDBDestination";
componentDestination.Description = "OLEDB Destination for the Flat File data load";
componentDestination.ComponentClassID = "DTSAdapter.OLEDBDestination";
Insert destination design-time instance and initialise component:
CManagedComponentWrapper instanceDestination = componentDestination.Instantiate();
instanceDestination.ProvideComponentProperties();
Set destination connection:
componentDestination.RuntimeConnectionCollection[0].ConnectionManagerID = connectionManagerOleDb.ID;
componentDestination.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.ToConnectionManager90(connectionManagerOleDb);
Indicates the name of the database object used to open a rowset:
instanceDestination.SetComponentProperty("OpenRowset", "[dbo].[sS_flatfileLoad]");
Specifies the mode used to open the database:
instanceDestination.SetComponentProperty("AccessMode", 3);
Specifies options to be used with fast load. Applies only if fast load is turned on:
instanceDestination.SetComponentProperty("FastLoadOptions", "TABLOCK,CHECK_CONSTRAINTS");
Indicates whether the values supplied for identity columns will be copied to the destination or not. In this case, we have set this property to false:
instanceDestination.SetComponentProperty("FastLoadKeepIdentity", false);
Indicates whether the columns containing null will have null inserted in the destination or not. In this case, we have opted not to insert nulls:
instanceDestination.SetComponentProperty("FastLoadKeepNulls", false);
Specifies the column code page to use when code page information is unavailable from the data source. In this case we used the default - 1252:
instanceDestination.SetComponentProperty("DefaultCodePage", 1252);
Specifies when commits are issued during data insertion. In this case, we have opted for the default size, which is set to 2147483647:
instanceDestination.SetComponentProperty("FastLoadMaxInsertCommitSize", 2147483647);
Indicates the number of seconds before a command times out. In this case, we have opted for the default value of 0, which indicates an infinite time-out:
instanceDestination.SetComponentProperty("CommandTimeout", 0);
Indicates the usage of the DefaultCodePage property value when describing character data. In this case, we have opted for the default value of false:
instanceDestination.SetComponentProperty("AlwaysUseDefaultCodePage", false);
Connect the Flat File source to the OLE DB Destination component:
dataFlowTask.PathCollection.New().AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentDestination.InputCollection[0]);
Get input and virtual input for destination to select and map columns:
IDTSInput100 destinationInput = componentDestination.InputCollection[0];
IDTSVirtualInput100 destinationVirtualInput = destinationInput.GetVirtualInput();
IDTSVirtualInputColumnCollection100 destinationVirtualInputColumns = destinationVirtualInput.VirtualInputColumnCollection;
Reinitialize the metadata, generating external columns from the flat file columns:
instanceDestination.AcquireConnections(null);
instanceDestination.ReinitializeMetaData();
instanceDestination.ReleaseConnections();
Select and map destination columns:
foreach (IDTSVirtualInputColumn100 virtualInputColumn in destinationVirtualInputColumns)
{ // Select column, and retain new input column
IDTSInputColumn100 inputColumn = instanceDestination.SetUsageType(destinationInput.ID,destinationVirtualInput, virtualInputColumn.LineageID, DTSUsageType.UT_READONLY);
// Find external column by name
IDTSExternalMetadataColumn100 externalColumn = destinationInput.ExternalMetadataColumnCollection[inputColumn.Name];
// Map input column to external column
instanceDestination.MapInputColumn(destinationInput.ID, inputColumn.ID, externalColumn.ID);
}
Execute the package or disable the below code if you intend running the package later:
sS_pkg.Execute();
Finally, save the package - in this case, we have opted to save the package into file system:
selectSIFISO_app.SaveToXml(@"E:\newArticle.dtsx", sS_pkg, null);
Dts.TaskResult = (int)ScriptResults.Success;
In addition to the above code, you will notice that some part of the code references to the below function. This function is used to assign DTS column properties:
private static void sS_AssignColumnProperties(RuntimeWrapper.IDTSConnectionManagerFlatFileColumn100 flatFileCol, string getColName, string getDelim) {
Assign delimiter:
flatFileCol.ColumnType = "Delimited";
flatFileCol.ColumnDelimiter = getDelim;
Indicate column data type - in this case, all the source columns will be set to String Data Type:
flatFileCol.DataType = RuntimeWrapper.DataType.DT_STR;
Indicate column width - in this case, width of all source columns will be set to a length of 100:
flatFileCol.ColumnWidth = 100;
Assign column name:
RuntimeWrapper.IDTSName100 columnName = flatFileCol as RuntimeWrapper.IDTSName100;
columnName.Name = getColName.ToString();
}

Execute multiple SSIS packages using T-SQL

If you want to execute a set of SSIS packages in SQL Server 2008 or 2005, you can do this using T-SQL. First you will need a table containing all of your package names. Then use a WHILE loop to execute each package.
Here is the example code:

Declare @FilePath varchar(2000)
Declare @cmd varchar(2000)
 
DECLARE @package_name varchar(200)
Declare @PackageCount int
Declare @X int
Set @X = 1
Set @PackageCount = (Select COUNT(*) from Packages)
set @FilePath = 'C:\Package Path\'
While (@X <= @PackageCount)
Begin
 
    With PackageList as
    (
    Select PackageName, Row_Number() Over(Order by PackageName) as  Rownum
    From Packages
    )
    SELECT @package_name = PackageName
    FROM PackageList
    Where Rownum = @X
 
    select @cmd = 'DTExec /F "' + @FilePath + @Package_name + '"'
 
    print @cmd
   
    Set @X = @X + 1
   
    exec master..xp_cmdshell @cmd
 
 
End
In the new version, SSIS 2012, you will be able to launch packages with T-SQL natively.

Thursday, February 16, 2012

How to Password-Protect a Folder


  1. Create a new folder and name it whatever you would like.
  2. Open the folder, right-click on a blank area in it, then select New -> Text Document from the pop-up menu.
  3. Open the text file you just created by double-clicking it and copy/paste in the following text:
    cls
    @ECHO OFF
    title Folder Private
    if EXIST "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}" goto UNLOCK
    if NOT EXIST Private goto MDLOCKER
    :CONFIRM
    echo Are you sure you want to lock the folder(Y/N)
    set/p "cho=>"
    if %cho%==Y goto LOCK
    if %cho%==y goto LOCK
    if %cho%==n goto END
    if %cho%==N goto END
    echo Invalid choice.
    goto CONFIRM
    :LOCK
    ren Private "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
    attrib +h +s "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
    echo Folder locked
    goto End
    :UNLOCK
    echo Enter password to unlock folder
    set/p "pass=>"
    if NOT %pass%== PASSWORD_GOES_HERE goto FAIL
    attrib -h -s "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
    ren "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}" Private
    echo Folder Unlocked successfully
    goto End
    :FAIL
    echo Invalid password
    goto end
    :MDLOCKER
    md Private
    echo Private created successfully
    goto End
    :End
  4. In the above code, replace the key PASSWORD_GOES_HERE with the password you want to use to unlock the folder. For example, if you want the password to be 123456, the line should look like:
    if NOT %pass%== 123456 goto FAIL
  5. Save your new file in the .bat format with the complete file name being locker.bat. To do this, make sure to change the Save as type: to All Files (*.*).
  6. In the folder you created back in Step #1, double click the locker.bat file and there will now be a new folder named Private where you can put anything you want.
  7. Upon exiting, double click the locker.bat file again. It will prompt you to answer whether you want to lock your folder or not. Press Y and the private folder will disappear.
  8. In order to retrieve the Private folder, all you have to do is double click the locker.bat file and enter the password which you set in Step #4, and the folder will appear again for you to access.
  9. That’s it!

Wednesday, January 25, 2012

Important Question


Difference b/w subquery and join

Why we use stored procedures

Filtered index

Staging server

BCP - Bulk Copy Program (
links

http://www.codeproject.com/Articles/16922/SQL-Bulk-Copy-with-C-Net
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
http://msdn.microsoft.com/en-us/library/aa196743(v=sql.80).aspx
http://www.dotnetcurry.com/ShowArticle.aspx?ID=323
http://www.sqlteam.com/article/using-bulk-insert-to-load-a-text-file
)




**************************


Inside a function we can't handle exceptions and can't use transactions.

A function can be used within a SELECT statement.

Inside a function we can't write INSERT or UPDATE commands.

Inside a function we can't call a stored procedure.

Inside a procedure, exception handling and transactions are allowed.




Monday, January 23, 2012

Difference Between Variable and Property:


Variables and properties both represent values that you can access. However, there are differences in storage and implementation.

Variables: A variable corresponds directly to a memory location. You define a variable with a single declaration statement. A variable can be a local variable, defined inside a procedure and available only within that procedure,
or it can be a member variable, defined in a module, class, or structure but not inside any procedure.
A member variable is also called a field.

Properties: A property is a data element defined on a module, class, or structure. You define a property with a code block between the Property and End Property statements.
The code block contains a Get procedure, a Set procedure, or both.
These procedures are called property procedures or property accessors.



*****

Imp Difference B/W Variables and Properties:

Point of difference      Variable                              Property

1. Declaration           Single declaration statement          Series of statements in a code block

2. Implementation        Single storage location               Executable code (property procedures)

3. Storage               Directly associated with the          Typically has internal storage not available
                         variable's value                      outside the property's containing class or
                                                               module; the property's value might or might
                                                               not exist as a stored element

4. Executable code       None                                  Must have at least one procedure

5. Read and write        Read/write or read-only               Read/write, read-only, or write-only
   access

6. Custom actions        Not possible                          Can be performed as part of setting or
   (in addition to                                             retrieving the property value
   accepting or
   returning the value)




Examples:::



public class Car
{

    int speed; //Is this sufficient enough if Car will only set and get it.

    public Car(int initialSpeed)
    {
        speed = initialSpeed;
    }

    //Is this actually necessary, is it only for setting and getting the member
        //variable or does it add some benefit to it, such as caching and if so,
        //how does caching work with properties.
    public int Speed 
    {
        get{return speed;}
        set{speed = value;}
    }

        //Which is better?
        public void MultiplySpeed(int multiply)
        {
            speed = speed * multiply; //Line 1
            this.Speed = this.Speed * multiply; //Line 2

            //Change speed value many times
            speed = speed + speed + speed;
            speed = speed * speed;
            speed = speed / 3;
            speed = speed - 4;

        }
}




1 - Fields can’t be used in Interfaces
You can’t enforce the existence of a field in an object’s public contract through an interface. For properties though it works fine.
2 - Validation
While your application currently may not require any validation logic to set a particular value, changing business requirements may require inserting this logic later. At that point changing a field to a property is a breaking change for consumers of your API. (For example if someone was inspecting your class via reflection).
3 - Binary Serialization
Changing a field to a property is a breaking change if you’re using binary serialization. Incidentally, this is one of the reasons VB10’s auto-implemented properties have a “bindable” backing field (i.e. you can express the name of the backing field in code) – that way, if you change an auto-implemented property to an expanded property, you can still maintain serialization compatibility by keeping the backing field name the same (in C# you’re forced to change it because it generates backing fields with unbindable names).
4 - A lot of the .NET databinding infrastructure binds to properties but not fields
I’ve heard arguments on both sides as to whether or not that’s a good thing, but the reality is that’s the way it works right now. (Note from me: WPF bindings work on properties)
5 - Exposing a public field is an FxCop violation
For many of the reasons listed above :)
http://www.codinghorror.com/blog/2006/08/properties-vs-public-variables.html
http://blogs.msdn.com/b/vbteam/archive/2009/09/04/properties-vs-fields-why-does-it-matter-jonathan-aneja.aspx

