Computer Programmer

A programmer is someone who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (Lisp, Java, Delphi, C++, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming; for this reason, the term programmer is sometimes considered an insulting or derogatory oversimplification of these other professions. This has sparked much debate amongst developers, analysts, computer scientists, programmers, and outsiders who continue to be puzzled at the subtle differences in these occupations.
Those proficient in computer programming may become famous, though this regard is normally limited to software engineering circles. Ada Lovelace is popularly credited as history's first programmer: she was the first to express an algorithm intended for implementation on a computer, Charles Babbage's analytical engine, in October 1842. Her program never ran, though one written by Konrad Zuse did in 1941. The ENIAC programming team, consisting of Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas, and Ruth Lichterman, were the first working programmers.
International Programmers' Day is celebrated annually on January 7.
Nature of the work
Computer programmers write, test, debug, and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming — advanced computing technologies and sophisticated new languages and programming tools — have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.
Programmers work in many settings, including corporate information technology departments, big software companies, and small service firms. Many professional programmers also work as contractors for consulting companies at clients' sites. Licensing is not typically required to work as a programmer, although professional certifications are commonly held by programmers. Programming is widely considered a profession (although some authorities disagree on the grounds that only careers with legal licensing requirements count as professions).
Programmers' work varies widely depending on the type of business they are writing programs for. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Although simple programs can be written in a few hours, programs that use complex mathematical formulas whose solutions can only be approximated or that draw data from many existing systems may require more than a year of work. In most cases, several programmers work together as a team under a senior programmer’s supervision.
Programmers write programs according to the specifications determined primarily by more senior programmers and by systems analysts. After the design process is complete, it is the job of the programmer to convert that design into a logical series of instructions that the computer can follow. The programmer codes these instructions in one of many programming languages. Different programming languages are used depending on the purpose of the program. COBOL, for example, is commonly used for business applications which are run on mainframe and midrange computers, whereas Fortran is used in science and engineering. C++ is widely used for both scientific and business applications. Java, C# and PHP are popular programming languages for Web and business applications. Programmers generally know more than one programming language and, because many languages are similar, they often can learn new languages relatively easily. In practice, programmers often are referred to by the language they know, e.g. as Java programmers, or by the type of function they perform or environment in which they work: for example, database programmers, mainframe programmers, or Web developers.
When making changes to the source code that programs are made up of, programmers need to make other programmers aware of the task that the routine is to perform. They do this by inserting comments in the source code so that others can understand the program more easily. To save work, programmers often use libraries of basic code that can be modified or customized for a specific application. This approach yields more reliable and consistent programs and increases programmers' productivity by eliminating some routine steps.
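As a hypothetical sketch in Python (the function and its values are invented for illustration), comments and reuse of library code might look like this:

```python
import math  # reuse the standard library instead of re-implementing sqrt


def distance(x1, y1, x2, y2):
    """Return the straight-line distance between two points.

    The docstring and the inline comment below exist so that other
    programmers can understand the routine's purpose at a glance.
    """
    # Pythagorean theorem: square root of the sum of squared differences
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)


print(distance(0, 0, 3, 4))  # → 5.0
```

The comments carry no meaning to the computer; they exist purely for the next programmer who must read or modify the routine.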

Testing and debugging

Programmers test a program by running it and looking for bugs. As bugs are identified, the programmer usually makes the appropriate corrections, then rechecks the program until an acceptably low level and severity of bugs remains. This process, called testing and debugging, is an important part of every programmer's job. Programmers may continue to fix such problems throughout the life of a program. Updating, repairing, modifying, and expanding existing programs is sometimes called maintenance programming. Programmers may also contribute to user guides and online help, or work with technical writers to produce such documentation.
Certain scenarios or execution paths may be difficult to test, in which case the programmer may elect to test by inspection, which involves a human inspecting the code on the relevant execution path, perhaps hand-executing it. "Test by inspection" is also sometimes used as a euphemism for inadequate testing, and it can be difficult to assess which meaning is intended.
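A minimal sketch of the test-and-fix cycle, in Python (the function and its checks are invented for illustration):

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    # An earlier, buggy version divided by a hard-coded 2; running the
    # checks below exposed the bug, and the fix divides by len(values).
    return sum(values) / len(values)


# Simple checks a programmer might run while testing and debugging:
assert average([2, 4, 6]) == 4
assert average([10]) == 10
print("all tests passed")
```

Each time a correction is made, the same checks are rerun, which is exactly the "recheck the program" step described above.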
Application versus system programming
Computer programmers often are grouped into two broad types: application programmers and systems programmers. Application programmers write programs to handle a specific job, such as a program to track inventory within an organization. They also may revise existing packaged software or customize generic applications which are frequently purchased from independent software vendors. Systems programmers, in contrast, write programs to maintain and control computer systems software, such as operating systems and database management systems. These workers make changes in the instructions that determine how the network, workstations, and CPU of the system handle the various jobs they have been given and how they communicate with peripheral equipment such as printers and disk drives.

Types of software

Programmers in software development companies may work directly with experts from various fields to create software — either programs designed for specific clients or packaged software for general use — ranging from computer and video games to educational software to programs for desktop publishing and financial planning. Programming of packaged software constitutes one of the most rapidly growing segments of the computer services industry.
In some organizations, particularly small ones, workers commonly known as programmer analysts are responsible for both the systems analysis and the actual programming work. The transition from a mainframe environment to one that is based primarily on personal computers (PCs) has blurred the once rigid distinction between the programmer and the user. Increasingly, adept end users are taking over many of the tasks previously performed by programmers. For example, the growing use of packaged software, such as spreadsheet and database management software packages, allows users to write simple programs to access data and perform calculations.
In addition, the rise of the Internet has made Web development a huge part of the programming field. More and more software applications nowadays are Web applications that can be used by anyone with a Web browser. Examples of such applications include the Google search service, the Hotmail e-mail service, and the Flickr photo-sharing service.


What is a Program?


An organized list of instructions that, when executed, causes the computer to behave in a predetermined manner. Without programs, computers are useless.
A program is like a recipe. It contains a list of ingredients (called variables) and a list of directions (called statements) that tell the computer what to do with the variables. The variables can represent numeric data, text, or graphical images.
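A tiny (invented) Python program makes the analogy concrete: the variables are the ingredients, and the statements are the directions that tell the computer what to do with them.

```python
# Ingredients (variables)
price = 40.0       # numeric data
quantity = 3       # numeric data
item = "notebook"  # text data

# Directions (statements)
total = price * quantity
print(f"Buying {quantity} {item}s costs {total}")  # → Buying 3 notebooks costs 120.0
```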
There are many programming languages -- C, C++, Pascal, BASIC, Java, FORTRAN, COBOL, and LISP are just a few. These are all high-level languages. One can also write programs in low-level languages called assembly languages, although this is more difficult. Low-level languages are closer to the language used by a computer, while high-level languages are closer to human languages.
Eventually, every program must be translated into a machine language that the computer can understand. This translation is performed by compilers, interpreters, and assemblers.
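This translation can be glimpsed directly in Python, where the standard library's `dis` module prints the bytecode instructions a function is compiled into before the interpreter executes them (an analogy to machine code, not machine code itself):

```python
import dis


def add(a, b):
    return a + b


# Print the low-level instructions that the high-level statement
# `a + b` compiles to; the interpreter executes these instructions,
# much as hardware executes machine language.
dis.dis(add)
```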
When you buy software, you normally buy an executable version of a program. This means that the program is already in machine language -- it has already been compiled and assembled and is ready to execute.


Various forms of Monitors



Cathode Ray Tube

Definitions

  • A cathode is a terminal or electrode at which electrons enter a system, such as an electrolytic cell or an electron tube.
  • A cathode ray is a stream of electrons leaving the negative electrode, or cathode, in a discharge tube (an electron tube that contains gas or vapor at low pressure), or emitted by a heated filament in certain electron tubes.
  • A vacuum tube is an electron tube consisting of a sealed glass or metal enclosure from which the air has been withdrawn.
  • A cathode ray tube or CRT is a specialized vacuum tube in which images are produced when an electron beam strikes a phosphorescent surface.
Besides television sets, cathode ray tubes are used in computer monitors, automated teller machines, video game machines, video cameras, oscilloscopes and radar displays.
The first cathode ray tube scanning device was invented by the German scientist Karl Ferdinand Braun in 1897. Braun introduced a CRT with a fluorescent screen, known as the cathode ray oscilloscope. The screen would emit a visible light when struck by a beam of electrons.
In 1907, the Russian scientist Boris Rosing (who worked with Vladimir Zworykin) used a CRT in the receiver of a television system that at the camera end made use of mirror-drum scanning. Rosing transmitted crude geometrical patterns onto the television screen and was the first inventor to do so using a CRT.
Modern phosphor screens using multiple beams of electrons have allowed CRTs to display millions of colors.

LCD

Liquid crystal display (LCD) is a thin, flat panel used for electronically displaying information such as text, images, and moving pictures. Its uses include monitors for computers, televisions, instrument panels, and other devices ranging from aircraft cockpit displays to everyday consumer devices such as video players, gaming devices, clocks, watches, calculators, and telephones. Among its major features are its lightweight construction, its portability, and its ability to be produced in much larger screen sizes than are practical for cathode ray tube (CRT) display technology. Its low electrical power consumption enables it to be used in battery-powered electronic equipment. It is an electronically modulated optical device made up of any number of pixels filled with liquid crystals and arrayed in front of a light source (backlight) or reflector to produce images in color or monochrome. The earliest discovery leading to the development of LCD technology, the discovery of liquid crystals, dates from 1888.[1] By 2008, worldwide sales of televisions with LCD screens had surpassed sales of CRT units.

Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters, the axes of transmission of which are (in most of the cases) perpendicular to each other. With no actual liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer.
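The blocking effect of crossed polarizers follows Malus's law, I = I0·cos²θ. A short Python sketch (a simplified model that ignores the liquid crystal layer itself) shows why the second, perpendicular filter blocks the light:

```python
import math


def transmitted(i0, theta_degrees):
    """Malus's law: intensity passing a polarizer whose axis is at
    angle theta to the light's polarization direction."""
    theta = math.radians(theta_degrees)
    return i0 * math.cos(theta) ** 2


# Light leaving the first filter is polarized at 0 degrees.
print(transmitted(1.0, 0))   # parallel second polarizer: 1.0 (all light passes)
print(transmitted(1.0, 90))  # crossed second polarizer: ~0 (blocked)
```

The liquid crystal's job, described next, is to rotate the polarization between the two filters so that light can pass even though the filters are crossed.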

The surfaces of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is unidirectionally rubbed using, for example, a cloth. The direction of the liquid crystal alignment is then defined by the direction of rubbing. The electrodes are made of a transparent conductor called indium tin oxide (ITO).

Before applying an electric field, the orientation of the liquid crystal molecules is determined by the alignment at the surfaces of the electrodes. In a twisted nematic device (still the most common liquid crystal device), the surface alignment directions at the two electrodes are perpendicular to each other, and so the molecules arrange themselves in a helical structure, or twist. This induces a rotation of the polarization of the incident light, and the device appears grey. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked, and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, thus constituting different levels of gray.

The optical effect of a twisted nematic device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, these devices are usually operated between crossed polarizers such that they appear bright with no voltage (the eye is much more sensitive to variations in the dark state than the bright state). These devices can also be operated between parallel polarizers, in which case the bright and dark states are reversed. The voltage-off dark state in this configuration appears blotchy, however, because of small variations of thickness across the device.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical, regardless of the polarity of the applied field).

When a large number of pixels are needed in a display, it is not technically possible to drive each directly since then each pixel would require independent electrodes. Instead, the display is multiplexed. In a multiplexed display, electrodes on one side of the display are grouped and wired together (typically in columns), and each group gets its own voltage source. On the other side, the electrodes are also grouped (typically in rows), with each group getting a voltage sink. The groups are designed so each pixel has a unique, unshared combination of source and sink. The electronics, or the software driving the electronics then turns on sinks in sequence, and drives sources for the pixels of each sink.
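A toy Python sketch of this row/column scheme (the image and its dimensions are invented): each row group (sink) is enabled in sequence, and the column sources for that row's lit pixels are driven simultaneously.

```python
# A 3x4 one-bit image: 1 = pixel on
image = [
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
]


def scan(image):
    """Drive the display one row (sink) at a time; for each active row,
    energize exactly the column sources whose pixels should light."""
    frames = []
    for row_index, row in enumerate(image):
        active_columns = [c for c, lit in enumerate(row) if lit]
        frames.append((row_index, active_columns))
    return frames


for sink, sources in scan(image):
    print("row", sink, "-> drive columns", sources)
```

Because each pixel sits at a unique row/column intersection, 3 + 4 = 7 drive lines suffice for all 12 pixels, instead of 12 independent electrodes.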
 

Gas Plasma Displays 
An overview of plasma displays


Gas plasma technology is a new way to build video and computer monitors. Essentially plasma units have the brightness and look of a CRT monitor, but they offer a much larger image and are thin enough and light enough to hang on any wall. This combination makes them ideal where lighting conditions would favor a monitor, but audience size indicates a projector. Like LCD displays, plasma monitors do not exhibit the distortion and loss of clarity in the corners inherent to CRTs.
How do plasma monitors work?
Plasma monitors work much like CRT monitors, but instead of using a single CRT surface coated with phosphors, they use a flat, lightweight surface covered with a matrix of millions of tiny glass bubbles, each having a phosphor coating. These phosphors are caused to glow in the correct pattern to create an image.
What are the advantages of plasma?
Plasma monitors have several advantages over CRT-based monitors:
  • Thin and lightweight: at only 4" - 6" thick and about 60-100 lbs., they’re easy to hang on any wall.
  • Very bright: less sensitive to ambient light than most LCD projectors, plasma monitors have the brightness and contrast of CRT-based sets.
  • 160° viewing cone: ideal when your room is wide and people may view the monitor from farther off-axis than normal.
  • Stable and distortion-free: unaffected by magnetic fields; useful in many applications where CRT monitors or LCD and CRT projectors are problematic. Entire image always in perfect focus, not just in the center, but all the way to the corners.
  • Look and feel: plasma somehow looks different--better--than monitors and projectors alike. It's hard to quantify that difference, but most people would say plasma displays have more depth and warmth than other types of media. They look very, very good.
What are the disadvantages of plasma?
This new technology has several disadvantages worth mentioning.
  • Cost: plasma is expensive. For that reason alone, plasma is not for everyone. But prices are coming down, as they do for most new technologies.
  • More susceptible to burn-in than CRT monitors. It's not a good medium on which to display a company logo for two or three hours at a time. But with the appropriate precautions, and in some situations a screen saver, you should not expect problems.
  • Resolution restrictions: plasma is subject to the same type of resolution problems as LCD or DLP projectors. You'll get the best images when the resolution of your source matches the "true" resolution of the monitor. But, as with LCD, the monitors will incorporate compression or expansion circuitry to automatically resize other resolution sources to match their native resolution, and most people will be very happy with the result. Still, if sharpness is critical for your application and you'll be using a variety of computer sources, you may be better off with a CRT-based unit.
  • Doesn't travel well: plasma is not portable. These monitors weigh 60 - 100 pounds and they don't do well if you drop them. If you want to travel with a plasma monitor, plan to invest in a good shipping case.
There's one other rumored "problem" with plasma that turns out not to be true. It has been said by some that plasma units do not have a long lifespan. Actually, the estimated life span for plasma monitors (according to Sony) is about 30,000 hours, which translates to approximately 15 years at 8 hours a day, 5 days a week, comparable to, or perhaps a bit better than, a CRT-based monitor.



Satellite TV On PC

Technology is advancing so fast that one can now watch satellite TV or listen to the radio on a home PC. All you need is special hardware known as a PCTV card, which comes in two kinds: one is installed inside the PC, while the other is an external box that plugs into the PC's USB port.
There are cards that use the PC’s infrastructure to decode satellite signals and allow users to enjoy free-to-air digital television and radio programs. There are cards that have built-in processors that allow TV viewing in a separate window while the PC runs other programs. Both kinds of cards can be utilized to receive Broadband Internet via Satellite. Requests are made using a telephone line but data is received at 40MB per second via the satellite dish.
To view satellite TV on your PC you need, at minimum, a Pentium II 333 MHz processor, an operating system such as Microsoft Windows 98/ME/2000/XP, and hardware consisting of a sound card, a spare USB slot, and a CD-ROM drive. In addition to the cards, you could install Windows Media Player, RealPlayer, or QuickTime Player, all of which will take you to the next level of viewership.
The options are many. The PC can be directly connected to a satellite dish by using a product like the Hauppauge 3000, through the Internet cable, or via the satellite box (run an aerial lead from the RF output socket of the Sky Digibox to the input aerial socket on a standard PC TV card or USB TV adapter). DirecTV and Dish TV both recommend a connection via their proprietary satellite TV receiver box as ideal.
With a PC-TV-Radio setup one can simultaneously or alternately watch regular TV, a movie, or sports, and enjoy crystal-clear music while writing, checking mail, telewebbing, or surfing the Internet. The options are astounding: one can download and record favorite programs, record music, and be creative.

Changing MAC address In Windows XP/Vista and Linux

You may have come across this tutorial before, but it is worth repeating. Let's get straight to the point.

There are two ways to change the MAC address on Windows.

Method #1: Changing the MAC address by editing the NIC properties in Device Manager.

Whether this works depends on the type of Network Interface Card (NIC) you have. If you have a card that doesn't support cloning the MAC address, go to the second method.

a) Go to Start->Settings->Control Panel and double click on Network and Dial-up Connections.

b) Right click on the NIC you want to change the MAC address and click on properties.

c) Under "General" tab, click on the "Configure" button

d) Click on "Advanced" tab

e) Under "Property section", you should see an item called "Network Address" or "Locally Administered Address", click on it.

f) On the right side, under "Value", type in the New MAC address you want to assign to your NIC. Usually this value is entered without the "-" between the MAC address numbers.

g) Go to the command prompt and type in "ipconfig /all" or "net config rdr" to verify the changes. If the changes did not take effect, use the second method.

h) If successful, reboot your system.
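As a hypothetical illustration of the value format in step f), a small Python helper might strip the separators and sanity-check that the new address is a locally administered unicast address (second-lowest bit of the first octet set, lowest bit clear):

```python
def registry_value(mac):
    """Strip the '-' or ':' separators, as the dialog expects."""
    return mac.replace("-", "").replace(":", "").upper()


def is_locally_administered_unicast(mac):
    """Check the two flag bits in the first octet of the address."""
    first_octet = int(registry_value(mac)[:2], 16)
    locally_administered = bool(first_octet & 0b10)  # bit 1 set
    unicast = not (first_octet & 0b01)               # bit 0 clear
    return locally_administered and unicast


print(registry_value("02-1a-2b-3c-4d-5e"))                   # 021A2B3C4D5E
print(is_locally_administered_unicast("02:1a:2b:3c:4d:5e"))  # True
```

Picking an address with the locally administered bit set (e.g. one starting with 02) avoids colliding with manufacturer-assigned addresses.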

Method #2: This should work on all Windows 2000/XP/Vista systems

a) Go to Start -> Run, type "regedt32" to start registry editor. Do not use "Regedit".

b) Go to "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}". Double click on it to expand the tree. The subkeys are 4-digit numbers, which represent particular network adapters. You should see it starts with 0000, then 0001, 0002, 0003 and so on.

c) Find the interface you want by searching for the proper "DriverDesc" key.

d) Edit, or add, the string key "NetworkAddress" (has the data type "REG_SZ") to contain the new MAC address.

e) Disable then re-enable the network interface that you changed (or reboot the system).

Getting MAC address from command line

a) Go to Start -> Run (or Win key + R), type "cmd", then press Enter.

b) Type "getmac" at the console window. Windows will show you the MAC addresses of all NICs (Ethernet and wireless) on your computer.

Spoof MAC address in Linux
Changing or cloning your MAC address in Linux (and most *nix systems) is very easy. All it takes is two easily scripted commands:

ifconfig eth0 down hw ether 01:02:10:B0:80:A1
ifconfig eth0 up

eth0 = Ethernet interface 0
01:02:10:B0:80:A1 = the new MAC address you want to change to.

Yes, it is very easy to change your MAC address without using any third-party script or application. You can change your MAC address anytime you need to.

Good luck, and never stop expanding your knowledge!

Compression, Encryption, Deduplication, and Replication: Strange Bedfellows

One of the great ironies of storage technology is the inverse relationship between efficiency and security: Adding performance or reducing storage requirements almost always results in reducing the confidentiality, integrity, or availability of a system.

Many of the advances in capacity utilization put into production over the last few years rely on deduplication of data. This key technology has moved from basic compression tools to take on challenges in the fields of replication and archiving, and is even moving into primary storage. At the same time, interconnectedness and the digital revolution have made security a greater challenge, with focus and attention turning to encryption and authentication to prevent identity theft or worse crimes. The only problem is, most encryption schemes are incompatible with compression or deduplication of data!

Incompatibility of Encryption and Compression

Consider a basic lossless compression algorithm: we take an input file consisting of binary data and replace all repeating patterns with a unique code. If a file contained the sequence "101110" eight hundred times in a row, we could replace the whole 4800-bit sequence with a much smaller sequence that says "repeat this eight hundred times". In fact, this is exactly what I did (using English) in the previous sentence! This basic concept, called run-length encoding, illustrates how most modern compression technology functions.
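A small Python sketch makes the idea concrete (a teaching toy, not a production compressor; the helper names are invented):

```python
def rle_encode(s):
    """Collapse runs of an identical character into [char, count] pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs


# Runs of identical bits collapse well:
print(rle_encode("111111110000"))  # [['1', 8], ['0', 4]]

# The repeating *pattern* from the text is stored even more compactly
# as the pattern plus a repeat count:
pattern, count = "101110", 800
data = pattern * count  # the full 4800-bit sequence
print(len(data), "bits stored as", len(pattern), "bits plus one count")
```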
Replace the sequence of identical bits with a larger block of data or an entire file and you have deduplication and single-instance storage! In fact, as the compression technology gains access to the underlying data, it can become more and more efficient. The software from Ocarina, for example, actually decompresses jpg and pdf files before recompressing them, resulting in astonishing capacity gains!
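A toy single-instance store can be sketched in a few lines of Python (the chunk size and sample data are invented for illustration): every chunk is keyed by its hash, and duplicate chunks are stored only once.

```python
import hashlib

store = {}  # digest -> chunk: each unique chunk is kept exactly once


def dedup_write(data, chunk_size=4):
    """Split data into fixed-size chunks, store each unique chunk once,
    and return the list of digests that reconstructs the data."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only the first copy is stored
        recipe.append(digest)
    return recipe


recipe = dedup_write(b"ABCDABCDABCDWXYZ")
print(len(recipe), "chunks referenced,", len(store), "actually stored")
# → 4 chunks referenced, 2 actually stored
```

The original data is recovered by looking up each digest in the recipe, which is why the store must be able to see the repeating, unencrypted data.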

Now let’s look at compression’s secretive cousin, encryption. It’s only a small intellectual leap to use similar ideas to hide the contents of a file, rather than just squashing it. But encryption algorithms are constantly under attack, so some very smart minds have come up with some incredibly clever methods to hide data. One of the most important advances was public-key cryptography, where two different keys are used: A public key used for writing, and a private key to read data. This same technique can be used to authenticate identity, since only the designated reader would (in theory) have the key required.
Cryptography has become exceedingly complicated lately in response to repeated attacks. Most compression and encryption algorithms are deterministic, meaning that identical input always yields the same output. This is unacceptable for strong encryption, since a known plaintext attack can be used with the public key to reveal the contents. Much work has focused on eliminating residues of the original data from the encrypted version, as illustrated brilliantly on Wikipedia with the classic Linux “tux” image. The goal is to make the encrypted data indistinguishable from random “noise”.
What happens when we mix these powerful technologies? Deduplication and encryption defeat each other! Deduplication must have access to repeating, deterministic data, and encryption must not allow this to happen. The most common solution (apart from skipping the encryption) is to place the deduplication technology first, allowing it access to the raw data before sending it on to be encrypted. But this leaves the data unprotected longer, and limits the possible locations where encryption technology can be applied. For example, an archive platform would have to encrypt data internally, since many now include deduplication as an integral component.
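The conflict can be demonstrated with a toy in Python. The `toy_encrypt` function below is NOT real cryptography; it merely mimics the one property that matters here: a fresh random nonce makes identical plaintexts encrypt to different ciphertexts, so hash-based deduplication can no longer match them.

```python
import hashlib
import os


def toy_encrypt(chunk, key=b"secret"):
    """NOT real cryptography -- a stand-in that mimics one property of
    strong encryption: a fresh random nonce makes identical plaintexts
    produce different ciphertexts."""
    assert len(chunk) <= 32  # toy keystream covers at most one hash output
    nonce = os.urandom(16)
    keystream = hashlib.sha256(key + nonce).digest()[:len(chunk)]
    return nonce + bytes(a ^ b for a, b in zip(chunk, keystream))


c1 = toy_encrypt(b"same chunk")
c2 = toy_encrypt(b"same chunk")
print(c1 == c2)  # False: a dedup engine now sees two "different" blocks
```

The two plaintext chunks would have hashed identically and deduplicated to a single stored copy; after encryption, every copy looks unique.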
Why do we prefer compression to encryption? Simply because that’s where the money is! If we can cut down on storage space or WAN bandwidth, we see cost avoidance or even real cost savings! But if we “waste” space by encrypting data, we only save money in the case of a security breach.

A Glimmer of Hope

I had long thought this was an intractable problem, but a glimmer of hope recently presented itself. My hosting provider allows users to back up their files to a special repository using the rsync protocol. This is pretty handy, as you can imagine, but I was concerned about the security of this service. What happens if someone gains access to all of my data by hacking their servers?
At first, I only stored non-sensitive data on the backup site, but this limited its appeal. So I went looking for something that would allow me to encrypt my data before uploading it, and I discovered two interesting concepts: rsyncrypto and gzip-rsyncable.
rsync is a solid protocol, reducing network demands by only sending the changed blocks of a file. But, as noted, compression and encryption tools change the whole file even if only a tiny bit has been altered. A few years back, the folks behind rsync (who also happen to be the minds behind the Samba CIFS server) developed a patch for gzip which causes it to compress files in chunks rather than in their entirety. This patch, called gzip-rsyncable, hasn’t been added to the main source even after a dozen years, but yields amazing results in accelerating rsync performance.
The same technique was then applied to RSA and AES cryptography to create rsyncrypto. This open source encryption tool makes a simple tweak to the standard CBC encryption schema (reusing the initialization vector) to allow encrypted files to be sent more efficiently over rsync. In fact, it relies on gzip-rsyncable to work its magic. Of course, the resulting file is somewhat less secure, but it is probably more than enough to keep a casual snooper at bay.
Both of these tools are similar to modern deduplication techniques in that they chop files up into smaller, variable-sized blocks before working their magic. And the result is awesome: I modified a single word in a large Word document that I had previously encrypted and stored at the backup site, and was able to transfer just a single block of the new file in an instant rather than a few minutes. My only real issue is the lack of integration of all of these tools: I had to write a bash script to encrypt my files to a temporary directory before rsyncing them. I wish they could be integrated with the main gzip and rsync sources!
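The variable-sized-block idea can be sketched with a toy content-defined chunker in Python (this is not the actual rsync or rsyncrypto algorithm; the window size and cut condition are invented): because chunk boundaries depend on the content itself rather than on file offsets, a small edit disturbs only the chunks near it, and everything after the edit re-aligns.

```python
import hashlib


def cdc_chunks(data, window=4, mask=0x07):
    """Toy content-defined chunking: cut wherever a hash of the trailing
    `window` bytes has its low three bits zero (roughly one position in
    eight), so boundaries depend on content, not on byte offsets."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        h = hashlib.sha256(data[i - window:i]).digest()[0]
        if h & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks


original = b"The quick brown fox jumps over the lazy dog. " * 20
edited = original.replace(b"quick", b"quiet", 1)  # one small edit near the start
a, b = cdc_chunks(original), cdc_chunks(edited)
print(len(a), "chunks;", len(set(a) & set(b)), "identical and reusable after the edit")
```

Only the chunks overlapping the edit change; an rsync-style transfer would resend just those.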
If you are interested in trying out these tools for yourself, and if you use a Mac, you are in luck: MacPorts offers both tools as simple downloads! Just install MacPorts, type "sudo port install gzip +rsyncable" to install gzip with the --rsyncable flag, then type "sudo port install rsyncrypto" and you're done! I'll post more details here if there is interest.

Ref : http://blog.fosketts.net/2009/02/05/compression-encryption-deduplication-replication/

Computer Viruses


A virus is a program designed by a computer programmer (malicious hacker) to do a certain unwanted function. The virus program can be simply annoying like displaying a happy face on the user's screen at a certain time and date. It can also be very destructive and damage your computer's programs and files causing the computer to stop working.
The reasons why hackers create viruses are open to speculation. The most often quoted reason is simply to see whether it can be done. Other reasons include Luddite-inspired "smash the machine" motivations, anti-establishment/anti-corporate actions, criminal intent, and various others that range into the "conspiracy theory" realm.
Viruses take two basic forms.
The first is the boot sector virus, which infects the section of a disk that is read first by the computer; this type infects the boot or master section of any disk it comes into contact with. The second is the program virus, which infects other programs when the infected program is run or executed. Some viruses infect both, and others change themselves (polymorphic viruses) depending on the programs they encounter.
Though viruses do not normally damage computer hardware, there have been attempts to create programs that will, for example, run the hard drive until it fails, or lodge themselves in the computer's clock circuitry (which has a rechargeable battery), allowing them to remain active even months after the computer has been unplugged. Other viruses target certain microchips, such as the BIOS chip: these chips can be rewritten under normal computer use, and a virus program can produce changes that cause them to fail. Still other viruses affect the characters or images displayed on the screen, which may give the impression of monitor failure.
Viruses can cause a great deal of damage to the computers they infect, and correcting that damage can cost a lot of time and money.
Computer viruses have been around for a long time, even before computers became widely used and they will likely remain with us forever. For that reason computer users will always need ways to protect themselves from virus programs. The main, common feature of a virus is that it is contagious! Their sole purpose is to spread and infect other computers.
A computer gets a virus from an infected file.
A virus might attach itself to a game, a program (shareware or commercial), or a file downloaded from a bulletin board or the Internet.
You cannot get a virus from a plain email message or from a simple text file! That is because the virus needs to be 'run' or executed before it can take effect. This usually happens when the user tries to open an infected program, accesses an infected disk or opens a file with an infected macro or script attached to it. A plain email message is made up of text which does not execute or run when opened.
Modern email programs allow users to format email messages with HTML and attach scripts to them for various purposes, so it is possible for a malicious hacker to attempt to spread a virus by building a virus script into an HTML-formatted email message.
When accepting software or scripts from Internet sites, or reading mail from unknown senders, it is best not to run a program from that site or sender without first checking it with an anti-virus program.
Protect yourself
You can take safeguards against virus infection. The first thing is to get an anti-virus program. Most reputable companies that create virus protection programs release an evaluation copy that an Internet user can download for free and use for a certain amount of time. This anti-virus program will be able to check your computer for viruses and repair damage or delete files that are infected with viruses. You may have to replace infected files that cannot be repaired.
The second thing you can do is purchase a copy of the program. The reason for this is that viruses are constantly being created. When you purchase an anti-virus program you are also purchasing periodic updates, which keep your anti-virus program up to date and able to deal with new viruses as they are encountered. Commercial anti-virus programs also allow the user to customize when and how the program will check the computer for viruses. You will need to renew this updating service periodically.
If you find that your computer has been infected with a virus, use an anti-virus program to clean it, and make sure to check all the disks you use. This includes all the hard drives on your computer(s) and all your floppy disks and CDs, as well as any other media that you save information on. Remember that a virus can easily re-infect your computer from one infected file!
If you have to reload your computer programs, use the original program disks. You may want to check your original disks before reinstalling the software. If your original disks are infected contact the distributor to get replacements.
Always take the time to ensure that your computer is properly protected. Spending money on a good virus checking program could save you hundreds of dollars and lots of time later.
A discussion of viruses would not be complete without mentioning hoaxes. Malicious people without programming skills will send out fake virus warnings, causing people to take unnecessary measures that often harm their computers. One example tries to get the unsuspecting computer user to delete an important system file by warning them that it is a virus. A legitimate virus warning will provide a link to a website operated by an anti-virus company with more information about that virus. Don't forward a virus warning until you have checked whether it is legitimate.

Kernel Definition


     In computing, the 'kernel' is the central component of most computer operating systems; it can be thought of as the bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
Operating system tasks are performed differently by different kernels, depending on their design and implementation. Monolithic kernels execute all operating system code in the same address space to increase the performance of the system, while microkernels run most operating system services in user space as servers, aiming to improve the maintainability and modularity of the operating system. A range of possibilities exists between these two extremes.

The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:
  • The Central Processing Unit (CPU, the processor). This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
  • The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
  • Any Input/Output (I/O) devices present in the computer, such as keyboard, mouse, disk drives, printers, displays, etc. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).
Key aspects necessary in resource management are the definition of an execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain.
Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC).
A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other.
Finally, a kernel must provide running programs with a method to make requests to access these facilities.
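That request mechanism is the system-call interface, and it is visible even from a high-level language. A minimal sketch using Python's os module, whose low-level functions are thin wrappers over the kernel calls described above:

```python
import os
import tempfile

# os.open/os.write/os.read map closely onto the kernel's system calls:
# the program asks the kernel to perform I/O, and the kernel drives the
# actual device (here, a file on disk) on the program's behalf.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # syscall: obtain a file descriptor
os.write(fd, b"hello from user space")        # syscall: kernel performs the write
os.close(fd)                                  # syscall: release the resource

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                       # syscall: kernel reads from the device
os.close(fd)
print(data)  # b'hello from user space'
```

The application never touches the disk hardware directly; every line above is a request that crosses the user/kernel boundary, with the kernel doing the privileged work.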

How to Choose the Best Graphics Card


In order to choose the best graphics card, it’s important to look at how you use your computer. If your computer is dedicated to word processing and Web browsing, you don’t need an expensive, high-end graphics card. If you’re a serious gamer looking for a high-performance graphics card, you should seek a card that measures in the top of its class in specifications and includes the built-in memory needed to keep everything running smoothly.


Know the Competition
The two most popular graphics card chipsets are nVidia’s GeForce and ATI’s Radeon. When you shop for a graphics card, you aren’t looking at nVidia and ATI as manufacturers, but rather as the type of chipset inside the graphics card. You’ll find that a single manufacturer may offer both types of chipset, as each is better in certain types of applications. For example, graphics-card maker ASUS offers both nVidia graphics cards and ATI graphics cards. Other manufacturers, such as Sapphire, offer one chipset exclusively.
Which one is better? It comes down to a number of factors. Some graphics cards work better with certain computer processors, while others perform better in certain video modes. For example, video game enthusiasts believe that ATI video cards work better in computers with AMD processors, since AMD owns ATI.
Choosing a chipset is largely a matter of subjective preference. If you’re looking for a graphics card to play your favorite video games, check those video games’ Web sites. Game developers test their products with multiple graphics cards before releasing them and can usually give players an idea of which graphics cards work best with their games and which ones are likely to have problems. If you can’t figure out which card is best from a specific game site, run a Web search for graphics cards and the game you want to play. Game-review sites and avid gamers on online forums all have an opinion, and chances are good that you can find some feedback for the performance of particular graphics cards with your favorite game.

Five Ways To Improve Computer Performance


You cannot avoid deteriorating PC performance even if you own an excellent computer with high-end system components.
To prevent system slowdown, you need to follow a routine that helps you scan and repair all computer-related problems.
You can adopt multiple methods to optimize the performance of your system. Listed here are the top 5 ways that you can enhance the performance of your PC:
  • Regular Registry Cleaning
  • Cleaning And Defragging Hard Drive
  • Regular Malware Scanning
  • Managing Programs that Load during System Start-up
  • Performing Memory Tests
Regular Registry Cleaning
Registry cleaning is perhaps the most important activity for enhancing the performance of a slow computer, because it removes unwanted information from the registry. All Windows operating systems (OS), such as Windows XP and Windows Vista, store all hardware and software configuration information in the registry, which is a centralized hierarchical database. The registry also stores information related to system settings and user preferences.
Over time, the registry starts expanding at an uncontrollable speed and may eventually get damaged and fragmented. This affects the data access time and generates several Windows Vista errors. You may often face unexpected system breakdowns if the problem intensifies.
However, it is possible to prevent this situation by regularly scanning and removing redundant, outdated, and invalid information from the registry. You can perform these tasks by using a dependable, professional and user-friendly registry cleaner utility that meets your specific requirements.
Cleaning And Defragging Hard Drive
As unwanted cookies and programs keep adding to the hard disk of your computer, it often gets congested. If you want to fix slow computer problems, you must use the Disk Cleanup utility provided in your Windows XP and Vista ‘System Tools.’ This will enable you to remove unnecessary junk and free up space on your hard disk. Additionally, you must use a Disk Defragmenter tool to defragment your system’s hard disk and make your data files contiguous and Windows programs easily accessible on your PC. These tools help you in your PC optimizing activities to a great extent.
Regular Malware Scanning
Many malicious programs, such as adware, spyware, Trojans, viruses, and worms may drastically affect the speed of your computer. They keep adding malicious files and registry entries, which may cause multiple system errors that can damage your computer and data files. Therefore, you must identify and delete these files by using a reliable anti-virus and anti-spyware tool.
Managing Programs that Load at System Start-up
Many programs are automatically added to the start-up list and load themselves when you start your computer. These start-up programs continue running in the background and slow down your system. To avoid system slowdown, you should stop unneeded programs from loading at system start-up. This can be done using the System Configuration utility (msconfig). You can also use an advanced registry cleaning tool to manage start-up programs.


Performing Memory Tests
System slowdowns may also occur due to malfunctioning RAM or memory chips. In order to identify the cause, you may perform a memory test with the help of a memory test tool. There are several memory testing tools available on the Internet and you can search for and download a tool from a reliable source. Next, use this tool to run diagnostic tests on memory modules on your system, detect the malfunctioning module, and then replace it.
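The idea behind such a memory test can be sketched in a few lines (a toy illustration only: real diagnostics such as Memtest86 run below the operating system and exercise physical RAM with many more patterns, while this sketch can only exercise a Python bytearray):

```python
# Toy memory pattern test: write known bit patterns into a buffer and
# read them back, flagging any mismatch as a suspect memory cell.
PATTERNS = [0x00, 0xFF, 0xAA, 0x55]  # all zeros, all ones, alternating bits

def pattern_test(size_bytes: int) -> bool:
    buf = bytearray(size_bytes)
    for pattern in PATTERNS:
        for i in range(size_bytes):
            buf[i] = pattern              # write the pattern everywhere
        if any(b != pattern for b in buf):
            return False                  # read-back mismatch: faulty memory
    return True

print(pattern_test(1 << 20))  # True on a healthy 1 MiB buffer
```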
Find more information about the Windows registry and improving your computer performance at Instant-Registry-Fixes.org.
Good luck!

IP Address Versions

IP version 6 addresses
The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the Internet's addressing capability. The permanent solution was deemed to be a redesign of the Internet Protocol itself. This next generation of the Internet Protocol, aimed at replacing IPv4 on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995.[3][4] The address size was increased from 32 to 128 bits (16 octets), which, even with a generous assignment of network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address space provides the potential for a maximum of 2^128, or about 3.403 × 10^38, unique addresses.

The new design aims not just to provide a sufficient quantity of addresses, but to allow efficient aggregation of subnet routing prefixes at routing nodes. As a result, routing table sizes are smaller, and the smallest possible individual allocation is a subnet for 2^64 hosts, which is the square of the size of the entire IPv4 address space. At these levels, actual address utilization rates will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment (that is, the local administration of the segment's available space) from the addressing prefix used to route external traffic for a network. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or renumbering.

The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is not the need to have complex address conservation methods as used in classless inter-domain routing (CIDR).

All modern desktop and enterprise server operating systems include native support for the IPv6 protocol, but it is not yet widely deployed in other devices, such as home networking routers, voice over Internet Protocol (VoIP) and multimedia equipment, and network peripherals.

Example of an IPv6 address:

2001:0db8:85a3:08d3:1319:8a2e:0370:7334
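The example above can be parsed with Python's standard ipaddress module, which also prints the conventional compressed notation (leading zeros in each group dropped) and lets us check the size of the 128-bit space:

```python
import ipaddress

addr = ipaddress.ip_address("2001:0db8:85a3:08d3:1319:8a2e:0370:7334")
print(addr)          # 2001:db8:85a3:8d3:1319:8a2e:370:7334  (compressed form)
print(addr.version)  # 6

# The full 128-bit address space mentioned above:
print(2 ** 128)      # 340282366920938463463374607431768211456, about 3.4 x 10^38
```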




IPv6 private addresses

Just as IPv4 reserves addresses for private or internal networks, blocks of addresses are set aside in IPv6 for private addresses. In IPv6, these are referred to as unique local addresses (ULA). RFC 4193 sets aside the routing prefix fc00::/7 for this block, which is divided into two /8 blocks with different implied policies (cf. IPv6). The addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if sites merge or packets are misrouted.
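The ULA scheme can be sketched with Python's ipaddress module. This is a simplification: RFC 4193 derives the 40-bit Global ID from a timestamp and a MAC-address hash, while this sketch just draws random bits to stand in for it.

```python
import ipaddress
import secrets

# Build a unique local /48 prefix: fd00::/8 (one half of fc00::/7) followed
# by a 40-bit pseudorandom Global ID, then 80 zero bits for the rest.
global_id = secrets.randbits(40)
prefix_int = (0xFD << 120) | (global_id << 80)
prefix = ipaddress.IPv6Network((prefix_int, 48))

print(prefix)  # e.g. fd4f:9a3c:21d8::/48 (different on every run)
print(prefix.subnet_of(ipaddress.IPv6Network("fc00::/7")))  # True
```

Because the Global ID is pseudorandom, two organizations that each generate their own ULA prefix are very unlikely to collide if their networks later merge.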

Early designs (RFC 3513) used a different block for this purpose (fec0::), dubbed site-local addresses. However, the definition of what constituted sites remained unclear and the poorly defined addressing policy created ambiguities for routing. The address range specification was abandoned and must no longer be used in new systems.

Addresses starting with fe80:, called link-local addresses, are assigned only within the local link area. These addresses are usually generated automatically by the operating system's IP layer for each network interface. This provides instant automatic network connectivity for any IPv6 host: if several hosts connect to a common hub or switch, they have an instant communication path via their link-local IPv6 addresses. This feature is used extensively, and invisibly to most users, in the lower layers of IPv6 network administration (cf. Neighbor Discovery Protocol).

None of the private address prefixes may be routed in the public Internet.
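These distinctions are easy to check programmatically; Python's ipaddress module classifies each of the ranges discussed above:

```python
import ipaddress

ula = ipaddress.ip_address("fd12:3456:789a::1")        # unique local (fc00::/7)
link_local = ipaddress.ip_address("fe80::1")           # link-local (fe80::/10)
public = ipaddress.ip_address("2001:4860:4860::8888")  # a globally routable address

print(ula.is_private)            # True: never routed on the public Internet
print(link_local.is_link_local)  # True: valid only on the local link
print(public.is_global)          # True: routable on the public Internet
```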

Hibernate your computer for a faster start up

Windows XP takes a lot of time to start. If you, like me, hate waiting, consider using the hibernation option instead of shutting down, for a faster startup.


To do this,

1. Hold down the “Shift” key in the shutdown dialog to change “Stand By” to “Hibernate”.

2. Or you can just press H to hibernate instantly.

To enable hibernation, go to Control Panel, click Power Options, click the Hibernate tab, and check the “Enable hibernation” box. To enable hibernation you must have sufficient free space on your C: drive.

You can even use the Power Options control panel to configure your power button to hibernate. To do this, click the Advanced tab and set the power button options as you prefer.

We shall see how to use Microsoft BootVis to reduce your computer's startup time in the next post. Subscribe to our feed to stay connected.

Enable Multiple Concurrent Remote Desktop Connections in Windows XP SP2

Windows XP machines include the Remote Desktop service, which allows the computer to be remotely connected to, accessed, and controlled from another computer or host. However, Windows XP allows only one concurrent remote desktop connection: whenever a remote user connects to a Windows XP host, the local user is disconnected and the local console screen is locked, with or without his or her permission.
Here’s a hack to unlock the single-session limitation and enable support for multiple concurrent remote desktop connections in Windows XP SP2, so that several users can simultaneously connect to the same computer via Remote Desktop. Follow the steps below:
  1. Download ConcurrentRemoteDesk.rar (To download click on the link) and extract the file.
  2. Restart the computer and boot into Safe Mode by pressing F8 during initial boot-up and selecting Safe Mode. (This step is only required if you’re currently running Windows Terminal Services or the Remote Desktop service; System File Protection has to be skipped and bypassed, or it will prompt an error.)
  3. Go to %windir%\System32 and %windir%\System32\dllcache and make a backup copy of termsrv.dll in both folders.
  4. Copy the downloaded termsrv.dll into %windir%\System32, %windir%\ServicePackFiles\i386 (if it exists), and %windir%\System32\dllcache.
  5. Then run the downloaded Multiremotedesk.reg to merge the registry values into the registry, or add them manually:
    [HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Terminal Server\Licensing Core]
    “EnableConcurrentSessions”=dword:00000001

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
    “EnableConcurrentSessions”=dword:00000001
    “AllowMultipleTSSessions”=dword:00000001
  6. Click on Start Menu -> Run command and type gpedit.msc and press Enter to open up the Group Policy Editor.
  7. Navigate to Computer Configuration -> Administrative Templates -> Windows Components -> Terminal Services.
  8. Enable “Limit number of connections” and set the number of connections to 3 (or more). This setting will allow more than one user to use the computer.
  9. Ensure the Remote Desktop is enabled in System Properties -> Remote tab by selecting the radio button for Allow users to connect remotely to this computer.
  10. Enable and turn on Fast User Switching in Control Panel -> User Accounts -> Change the way users log on or off.
  11. Restart the computer normally.
To uninstall, revert to the original termsrv.dll. You probably have to do this in Safe Mode if Terminal Services is enabled and running.

Overcome Auto Reboot Problem in Windows XP

        Many people have faced cases when a system fault/error/crash ends up freezing the OS at the dreaded BSOD (Blue Screen of Death), which displays the cause of the crash and gives some details about the state of the system when it crashed. The major annoyance is that it requires a “cold” reboot (reset) or a complete power shutdown, reminding you what those two buttons on the front of your PC case are for. Moreover, if you are a system administrator who needs your servers to run non-stop 24/7, this can be a real headache. But have no fear, the fix is here. This registry hack is valid for all NT, 2000, XP and 2003 releases. To bypass the BSOD altogether and enable the instant “Auto Reboot” feature, run Regedit and go to:


HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl. Right-click the “AutoReboot” DWORD [REG_DWORD] value in the right-hand pane -> select Modify -> change it to 1 (Auto Reboot enabled) -> click OK -> close the Registry Editor.


Restart Windows for the change to take effect. From now on the OS will reboot upon locking up, right after writing to the crash log file (if enabled). To disable it, change the “AutoReboot” value back to 0 (default).

Operating System

      An operating system (OS) is an interface between hardware and user; it is responsible for the management and coordination of activities and the sharing of the resources of the computer, and it acts as a host for computing applications run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers, and video game consoles) as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players use an operating system of some type. Some of the simplest devices may, however, use an embedded operating system that may be contained on a data storage device.


Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system with some kind of software user interface (SUI) like typing commands by using command line interface (CLI) or using a graphical user interface (GUI, commonly pronounced “gooey”). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. (Whether the user interface should be included as part of the operating system is a point of contention.)
Common contemporary operating systems include BSD, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7). While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems, although the Microsoft Windows line of operating systems has almost 90% of the client PC market.

From Wikipedia


History of the Hard Drives

The hard disk drive was invented by some IBM engineers working under Rey Johnson at IBM in San Jose, CA, in about 1952 to 1954. I worked at IBM from 1965 to 1981 and got to meet and work with some of those men - Rey Johnson, John Lynott, Don Cronquist, Bob Schneider and Lou Stevens come to mind right now.
In 1965 (I think that was the year) a number of engineers left IBM (they were known as the "dirty dozen" within IBM) and founded Memorex. Al Shugart, one of them, later left Memorex and founded first Shugart Associates where the 5 1/4" floppy disk drive was a major product, then Seagate Technology, which effectively started today's industry of small hard disk drives.
The early drives almost all had linear actuators, that is, they moved the heads across the disks in a straight line, using a carriage with wheels. It was only later that rotary actuators, where the heads are held at the tips of a comb-like array and they swing back and forth like a gate, became popular. Because the rotary actuator is cheaper, it's now the standard for all hard disk drives, and that's what I'll be talking about.
The first IBM RAMAC disk drive had a couple of dozen disks, each about 2 feet in diameter, and ONE head! The head was moved from disk to disk and back and forth on each disk using a system of cables and pulleys and stepping motors. The added speed of having at least one head for each disk surface, and of using both surfaces of each disk, soon became obvious, and drives began to look pretty "modern" by 1960, although they were vastly larger and more expensive. Whether the heads are moved in a straight line or swung in an arc, something has to provide the force and the control to move them and keep them in the right place.
Stepping motors, hydraulic actuators and voice coil motors have been used to provide the motive force. Stepping motors have a built-in capability to hold in one position. Hydraulic actuators and voice coil motors (VCMs) provide force, but can't hold a position with great accuracy. A rack with detent pawls has been used, but nowadays a servo system is used, with the positioning information recorded on the disks.
So you can have a disk drive with a stepping motor and you don't need a "servo" or you can have a disk drive with "servo data" recorded on the disks. A stepping motor is simpler (at least in concept) and cheaper, but it's slower in seek time because it isn't really very powerful, and it isn't capable of really, really fine precision. A VCM can provide enormous forces, but it needs control, which the servo system provides. However servo feedback systems are complicated and you have to pre-record the positioning data somehow (usually on one or more of the disks), and that takes up space that could be used for "real" data. On the other hand, servo systems can provide incredibly precise positioning.
The style of hard disk drive we use today began to emerge in the early 1980s. I think it was Maxtor, under Frank Gibeau, where the first high-volume 5 1/4" disk drives with a rotary actuator, a VCM and a servo system were produced.
In 1986, Finis Conner left Seagate and founded Conner Peripherals along with John Squires, and they built the first high-volume 3 1/2" disk drives. The first one, 40 MB, was called the "Fat 40". Not only did they popularize the new smaller "form factor", but they were the first to have an "embedded servo" or "sector servo" in volume.
Meanwhile, Quantum Corporation had been building 8" and 5 1/4" disk drives since 1980, and in the mid 1980's they (actually I think it was Joel Harrison and Bill Moon) saw an opportunity with the 3 1/2" form factor and invented the "hard card", a disk drive on an expansion card that you could just plug into your AT. And that's how the IDE interface got started.
By the way, Quantum used a rather odd variant of the servo system for many years, where the servo information was actually very fine lines etched on a piece of glass attached to the actuator, and read with a photocell. It's actually more complicated, but that's a subject for another discussion.
Around 1990, laptops began to appear, and with them came the 2 1/2" form factor. I never worked closely with 2 1/2" drives, and I'm not very well versed in their historical development.
Way back in the old days, when the world was young and all, a single disk surface was reserved for "servo data" on disk drives that had a voice-coil actuator. Drives with stepper motors didn't need that stuff, and could justly claim that they didn't have to "waste" disk surface on "servo data".
Servo data, by the way, is information that's pre-recorded on the disk and specially formatted to make it possible for the drive to know where its heads are positioned. A stepper motor has, if no errors have occurred, the equivalent information built into its mechanical structure.
But during the late 1970s and early 1980s, techniques were developed that allowed the servo data to be written on the same surfaces that hold the regular user data. There were several schemes proposed and actually implemented, but the one that has taken hold is called "sector servo", where some number of regions on each data track are specially reserved for servo information. Because the sectors are physically coherent on each disk surface, they're commonly called "servo spokes". I've seen drives with as few as 64 spokes per revolution and as many as 128.
I believe the first major manufacturer to use sector servo was Conner--sector servo and the 3 1/2" form factor were their keys to success when they started in (I think) 1986. [At the time, I was one of the majority of old disk drive people who thought they were headed for disaster. Later, I worked there. Shows how smart I am!]
While there may still be drives manufactured with a dedicated servo surface, I think the last major manufacturer to use one was Seagate, up until a couple of years ago. The first Barracuda drive had a dedicated servo surface. Later Barracudas, and I think all current Seagates, use sector servo technology.

Disk drive production has become easier nowadays - there are even cheap hard drive deals which indicate cost-efficient manufacturing of these devices.
www.logicsmith.com

Measurements of Data Speed

Today there are generally two ways of describing data transfer speeds: in bits per second or in bytes per second. As explained below, a byte is made up of 8 bits. Network engineers still describe network speeds in bits per second, while your internet browser would usually measure a file download rate in bytes per second. A lowercase "b" usually means a bit, while an uppercase "B" represents a byte.

Bps 

Short for bits per second, bps was the main way of describing data transfer speeds several decades ago. It is often confused with the baud rate, which strictly counts symbols per second; for early modems the two happened to match, so a 600 baud modem was one which could transfer data at around 600bps.

Kbps 

Kilobits per second, or 1,000 bits per second.

Mbps

1,000,000 bits per second (usually used in describing internet download/upload speeds).

Gbps

1,000 megabits per second, or 1,000,000,000 bits per second. This term is most commonly heard in local area networks, where the close proximity of machines allows for lightning-fast data transfer rates.
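These units make it straightforward to estimate transfer times. A sketch in Python, assuming the ideal case of a link running at its full rated speed (real downloads are slower due to overhead):

```python
def download_seconds(file_bytes, link_bps):
    """Seconds needed to move file_bytes over a link rated at link_bps bits/second."""
    return file_bytes * 8 / link_bps

# A 500 MB (500,000,000 byte) file over a 10 Mbps connection:
print(download_seconds(500_000_000, 10_000_000))  # 400.0 seconds
```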

Names for different sizes of data

When choosing a new computer, we come across terms such as "300GB hard drive" and "500MB download", and to the uninitiated this can be somewhat disconcerting. Data in a computer is represented as a series of bits. Since the birth of computers, bits have been the language that controls the processes taking place inside that mysterious black box called your computer. In this article, we look at the very language that your computer uses to do its work.

Bit 

A bit is simply a 1 or a 0. A true or a false. It is the most basic unit of data in a computer, like the dots and dashes in Morse code. Machine language, the raw instructions a processor executes, is itself made up of patterns of bits.

Byte 

In computer science, a byte is a unit of information storage that equals 8 bits and can be used to represent a letter or a number. For example, the pattern 01000001 is 8 bits long and represents the letter A in ASCII.
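You can check this yourself in a few lines. A quick Python illustration of the same byte:

```python
# The 8-bit pattern 01000001 equals 65 in decimal, the ASCII code for 'A'.
bits = "01000001"
value = int(bits, 2)   # interpret the string as a base-2 number
print(value)           # 65
print(chr(value))      # A

# And back again: the letter 'A' as an 8-bit pattern.
print(format(ord("A"), "08b"))  # 01000001
```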

kB 

A kB (kilobyte) is a unit of data that equals 1024 bytes. The odd number comes from binary: computers count in powers of 2, and 2^10 = 1024 is the power of 2 closest to 1000.

MB 

A megabyte is 1024kB, or 1024^2 (1,048,576) bytes.

GB

A gigabyte is a unit of data storage worth roughly a billion bytes, meaning either exactly 1 billion bytes (10^9) or 1024^3 bytes, which is approximately 1.07 billion. More often than not in advertising, gigabytes are presented as 1 billion bytes and not 1024^3 (read the fine print in your adverts!). This explains why a freshly formatted 500GB hard drive shows up as roughly a 465GB one instead. Not too long ago many people were discussing storage in megabytes. These days, storage has become so cheap that having gigabytes is considered the norm.
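The shrinkage is simple arithmetic: divide the advertised decimal bytes by 1024^3. A short Python sketch:

```python
# Drive makers count a gigabyte as 10**9 bytes; operating systems
# often count it as 2**30 bytes, so the same drive "shrinks" on screen.
advertised_bytes = 500 * 10**9          # a "500GB" drive
reported_gb = advertised_bytes / 2**30  # what the OS reports
print(round(reported_gb, 1))  # 465.7
```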

TB 

A terabyte is 1024^4 bytes, about one trillion bytes, or 1024 gigabytes. Data centres such as those operated by Google handle thousands if not millions of terabytes of data each day. As storage becomes cheaper and faster, terabytes are becoming a commonly heard term.

PB 

A petabyte is a unit of information or computer storage equal to one quadrillion bytes (1024^5). Microsoft stores a total of approximately 14 petabytes on 900 servers.


Software


Software, commonly known as programs, consists of all the electronic instructions that tell the hardware how to perform a task. These instructions come from a software developer in a form that will be accepted by the operating system they are written for. For example, a program designed for the Windows operating system will only work on that operating system. Compatibility varies as the design of the software and of the operating system differ: software designed for Windows XP may experience compatibility issues when running under Windows 2000 or NT.
Software can also be described as a collection of routines, rules and symbolic languages that direct the functioning of the hardware.
Software is capable of performing specific tasks, as opposed to hardware, which can only perform the mechanical tasks it is physically designed for. Practical computer systems divide software into three major classes:
  1. System software: Helps run the computer hardware and computer system. System software includes operating systems, device drivers, diagnostic tools and more.
  2. Programming software: Software that assists a programmer in writing computer programs.
  3. Application software: Allows users to accomplish one or more tasks.
The term "software" is sometimes used in a broader context to describe any electronic media content which embodies expressions of ideas such as film, tapes, records, etc. Software is the electronic instruction that tells the computer to do a task.

Hardware

Hardware is the physical part of the computer itself, including the Central Processing Unit (CPU) and related microchips and micro-circuitry, keyboards, monitors, case and drives (hard, CD, DVD, floppy, optical, tape, etc.). Other extra parts, called peripheral components or devices, include the mouse, printers, modems, scanners, digital cameras and cards (sound, colour, video), etc. Together they are often referred to as a personal computer.
Central Processing Unit - Though the term relates to a specific chip, the processor, a CPU's performance is determined by the rest of the computer's circuitry and chips.
Currently the Pentium chip or processor, made by Intel, is the most common CPU, though many other companies produce processors for personal computers. Examples are the CPUs made by Motorola and AMD.


Chip



With faster processors, the clock speed becomes more important. Compared to some of the first computers, which operated at below 30 megahertz (MHz), the Pentium chips began at 75 MHz in the mid-1990s. Speeds now exceed 3000 MHz, or 3 gigahertz (GHz), and different chip manufacturers use different measuring standards (check your local computer store for the latest speeds). Whether you are able to upgrade to a faster chip depends on the circuit board the chip is housed in, the motherboard. The motherboard contains the circuitry and connections that allow the various components to communicate with each other.
Though there were many computers using many different processors before this, I call the 80286 processor the advent of home computers, as these were the processors that made computers available for the average person. Using a processor before the 286 involved learning a proprietary system and software. Most new software is developed for the newest and fastest processors, so it can be difficult to use an older computer system.
Keyboard - The keyboard is used to type information into the computer, or input information. There are many different keyboard layouts and sizes, with the most common for Latin-based languages being the QWERTY layout (named for the first 6 keys). The standard keyboard has 101 keys. Notebooks have embedded keys accessible by special keys or by pressing key combinations (CTRL or Command and P, for example). Ergonomically designed keyboards are designed to make typing easier. Hand held devices have various and different keyboard configurations and touch screens.
Some of the keys have a special use. These are referred to as command keys. The 3 most common are the Control (CTRL), Alternate (Alt) and Shift keys, though there can be more (the Windows key or the Command key, for example). Each key on a standard keyboard has one or two characters. Press the key to get the lower character and hold Shift to get the upper.
Removable Storage and/or Disk Drives - All disks need a drive to get information off the disk - or read - and to put information on the disk - or write. Each drive is designed for a specific type of disk, whether it is a CD, DVD, hard disk or floppy. Often the terms 'disk' and 'drive' are used to describe the same thing, but it helps to understand that the disk is the storage device which contains computer files - or software - and the drive is the mechanism that runs the disk.
Digital flash drives work slightly differently as they use flash memory to store information, so there are no moving parts. Digital cameras also use flash memory cards to store information, in this case photographs. Hand held devices use digital drives and many also use memory cards.
Mouse - Most modern computers today are run using a mouse controlled pointer. Generally if the mouse has two buttons the left one is used to select objects and text and the right one is used to access menus. If the mouse has one button (Mac for instance) it controls all the activity and a mouse with a third button can be used by specific software programs.
One type of mouse has a round ball under the bottom of the mouse that rolls and turns two wheels which control the direction of the pointer on the screen. Another type of mouse uses an optical system to track the movement of the mouse. Laptop computers use touch pads, buttons and other devices to control the pointer. Hand helds use a combination of devices to control the pointer, including touch screens.

>> Note: It is important to clean the mouse periodically, particularly if it becomes sluggish. A ball type mouse has a small circular panel that can be opened, allowing you to remove the ball. Lint can be removed carefully with a tooth pick or tweezers and the ball can be washed with mild detergent. A build up will accumulate on the small wheels in the mouse. Use a small instrument or finger nail to scrape it off taking care not to scratch the wheels. Track balls can be cleaned much like a mouse and touch-pad can be wiped with a clean, damp cloth. An optical mouse can accumulate material from the surface that it is in contact with which can be removed with a finger nail or small instrument.
Monitors - The monitor shows information on the screen when you type. This is called outputting information. When the computer needs more information it will display a message on the screen, usually through a dialog box. Monitors come in many types and sizes. The resolution of the monitor determines the sharpness of the screen. The resolution can be adjusted to control the screen's display.
Most desktop computers use a monitor with a cathode ray tube (CRT) or a liquid crystal display (LCD). Most notebooks use a liquid crystal display monitor.

To get the full benefit of today's software with full colour graphics and animation, computers need a colour monitor with a display or graphics card.
Printers - The printer takes the information on your screen and transfers it to paper or a hard copy. There are many different types of printers with various levels of quality. The three basic types of printer are; dot matrix, inkjet, and laser.
  • Dot matrix printers work like a typewriter, transferring ink from a ribbon to paper with a series, or 'matrix', of tiny pins.
  • Ink jet printers work like dot matrix printers but fire a stream of ink from a cartridge directly onto the paper.
  • Laser printers use the same technology as a photocopier, using heat to transfer toner onto paper.
Modem - A modem is used to translate information transferred through telephone lines, cable or line-of-sight wireless.
The term stands for modulator-demodulator: the modem changes the signal from digital, which computers use, to analog, which telephone lines use, and then back again. Digital modems transfer digital information directly without converting to analog.
Modems are measured by the speed at which information is transferred, traditionally quoted as the baud rate (for modern modems the more accurate measure is bits per second). Originally modems worked at speeds below 2400 baud, but today analog speeds of 56,000 bits per second are standard. Cable, wireless or digital subscriber lines can transfer information much faster, at rates of 300,000 bits per second and up.
Modems also use Error Correction which corrects for transmission errors by constantly checking whether the information was received properly or not and Compression which allows for faster data transfer rates. Information is transferred in packets. Each packet is checked for errors and is re-sent if there is an error.
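The check-and-resend idea can be sketched in a few lines. This toy example uses a simple sum-based checksum; real modems use stronger codes (such as CRCs), and the function names here are purely illustrative:

```python
def checksum(data):
    """Toy checksum: the sum of all byte values, modulo 256."""
    return sum(data) % 256

def send_packet(data):
    """Pair the data with its checksum, as the sender would."""
    return data, checksum(data)

def receive_packet(data, check):
    """True if the packet arrived intact; False means ask for a resend."""
    return checksum(data) == check

data, check = send_packet(b"hello")
print(receive_packet(data, check))      # True  - accept the packet
print(receive_packet(b"hellp", check))  # False - one byte corrupted, resend
```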
Anyone who has used the Internet has noticed that at times the information travels at different speeds. Depending on the amount of information that is being transferred, the information will arrive at its destination at different times. The amount of information that can travel through a line is limited. This limit is called bandwidth.
There are many more variables involved in communication technology using computers, much of which is covered in the section on the Internet.
Scanners - Scanners allow you to transfer pictures and photographs to your computer. A scanner 'scans' the image from top to bottom, one line at a time, and transfers it to the computer as a series of bits or a bitmap. You can then take that image and use it in a paint program, send it out as a fax or print it. With optional Optical Character Recognition (OCR) software you can convert printed documents such as newspaper articles to text that can be used in your word processor. Most scanners use TWAIN software that makes the scanner accessible by other software applications.
Digital cameras allow you to take digital photographs. The images are stored on a memory chip or disk that can be transferred to your computer. Some cameras can also capture sound and video.
Case - The case houses the microchips and circuitry that run the computer. Desktop models usually sit under the monitor and tower models beside it. They come in many sizes, including desktop, mini, midi, and full tower. There is usually room inside to expand or add components at a later time. By removing the cover of the case you may find plate-covered, empty slots that allow you to add cards. There are various types of slots and connections, including IDE, AGP, PCI, USB and Firewire.
Depending on the type, notebook computers may have room to expand. Most notebooks also have connections, or ports, that allow expansion or connection to exterior, peripheral devices such as a monitor, portable hard-drives or other devices.
Cards - Cards are components added to computers to increase their capability. When adding a peripheral device make sure that your computer has a slot of the type needed by the device. 
Sound cards allow computers to produce sound like music and voice. The older sound cards were 8 bit, then 16 bit, then 32 bit. Though the human ear can't distinguish the fine differences between sounds produced by the more powerful sound cards, they allow for more complex music and music production.
Colour cards allow computers to produce colour (with a colour monitor of course). The first colour cards were 2 bit, which produced 4 colours [CGA]. It was amazing what could be done with those 4 colours. Next came 4 bit, allowing for 16 colours [EGA and VGA]. Then came 16 bit, allowing for 65,536 colours, and then 24 bit, which allows for almost 17 million colours. Today's 32 bit modes use those same 17 million colours plus an extra channel, and newer 'deep colour' modes of 30 bits and more allow monitors to display over a billion separate colours.
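Those colour counts all come from the same rule: a bit depth of n gives 2^n distinct colours. A small Python illustration:

```python
def colours(bit_depth):
    """Number of distinct colours a card of this bit depth can produce."""
    return 2 ** bit_depth

for bits in (2, 4, 16, 24):
    print(bits, "bit ->", colours(bits), "colours")
# 2 bit -> 4, 4 bit -> 16, 16 bit -> 65536, 24 bit -> 16777216
```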
Video cards allow computers to display video and animation. Some video cards allow computers to display television as well as capture frames from video. A video card with a digital video camera allows computers users to produce live video. A high speed connection is required for effective video transmission. 
Network cards allow computers to connect together to communicate with each other. Network cards have connections for cable, thin wire or wireless networks. For more information see the section on Networks.
Cables connect internal components to the motherboard, a board with a series of electronic pathways and connections that allow the CPU to communicate with the other components of the computer.
Memory - Memory can be very confusing but is usually one of the easiest pieces of hardware to add to your computer. It is common to confuse chip memory with disk storage. An example of the difference between memory and storage would be the difference between a table where the actual work is done (memory) and a filing cabinet where the finished product is stored (disk). To add a bit more confusion, the computer's hard disk can be used as temporary memory when the program needs more than the chips can provide.
Random Access Memory or RAM is the memory that the computer uses to temporarily store the information as it is being processed. The more information being processed the more RAM the computer needs.
One of the first home computers, the Commodore 64, used 64 kilobytes of RAM. Today's modern computers need a minimum of 64 Mb (128 Mb or more recommended) to run Windows or OS X with modern software.
RAM memory chips come in many different sizes and speeds and can usually be expanded. Older computers came with 512 Kb of memory which could be expanded to a maximum of 640 Kb. In most modern computers the memory can be expanded by adding or replacing the memory chips depending on the processor you have and the type of memory your computer uses. Memory chips range in size from 1 Mb to 4 Gb. As computer technology changes the type of memory changes as well making old memory chips obsolete. Check your computer manual to find out what kind of memory your computer uses before purchasing new memory chips.
http://www.grassrootsdesign.com/intro/hardware.php