Kernel Definition







     In computing, the 'kernel' is the central component of most computer operating systems; it can be thought of as the bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
Operating system tasks are done differently by different kernels, depending on their design and implementation. Monolithic kernels try to achieve these goals by executing all the operating system code in the same address space to increase the performance of the system, while microkernels run most of the operating system services in user space as servers, aiming to improve the maintainability and modularity of the operating system. A range of possibilities exists between these two extremes.

The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:
  • The Central Processing Unit (CPU, the processor). This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
  • The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
  • Any Input/Output (I/O) devices present in the computer, such as keyboard, mouse, disk drives, printers, displays, etc. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).
Key aspects necessary in resource management are the definition of an execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain.
Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC).
A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other.
Finally, a kernel must provide running programs with a method to make requests to access these facilities.
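To make the idea of system calls and IPC concrete, here is a minimal Python sketch (POSIX systems only; the os module's pipe, fork, read and write functions are thin wrappers around the corresponding kernel system calls). It illustrates the mechanism in general, not the behaviour of any particular kernel:

    import os

    # Ask the kernel for an IPC channel and a new process; each call
    # below crosses into the kernel via a system call.
    read_fd, write_fd = os.pipe()
    pid = os.fork()

    if pid == 0:                          # child process
        os.close(read_fd)
        os.write(write_fd, b"hello from the child")
        os._exit(0)
    else:                                 # parent process
        os.close(write_fd)
        message = os.read(read_fd, 1024)  # blocks until the child writes
        os.waitpid(pid, 0)                # reap the child
        print(message.decode())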

How to Choose the Best Graphics Card


In order to choose the best graphics card, it’s important to look at how you use your computer. If your computer is dedicated to word processing and Web browsing, you don’t need an expensive, high-end graphics card. If you’re a serious gamer looking for a high-performance graphics card, you should seek a card that ranks at the top of its class in specifications and includes the built-in memory needed to keep everything running smoothly.


Know the Competition
The two most popular graphics card chipsets are nVidia’s GeForce and ATI’s Radeon. When you shop for a graphics card, you aren’t looking at nVidia and ATI as manufacturers, but rather as the type of chipset inside the graphics card. You’ll find that a single manufacturer may offer both types of chipset, as each is better in certain types of applications. For example, graphics-card maker ASUS offers both nVidia graphics cards and ATI graphics cards. Other manufacturers, such as Sapphire, offer one chipset exclusively.
Which one is better? It comes down to a number of factors. Some graphics cards work better with certain computer processors, while others perform better in certain video modes. For example, video game enthusiasts believe that ATI video cards work better in computers with AMD processors, since AMD owns ATI.
Choosing a chipset is largely a matter of subjective preference. If you’re looking for a graphics card to play your favorite video games, check those video games’ Web sites. Game developers test their products with multiple graphics cards before releasing them and can usually give players an idea of which graphics cards work best with their games and which ones are likely to have problems. If you can’t figure out which card is best from a specific game site, run a Web search for graphics cards and the game you want to play. Game-review sites and avid gamers on online forums all have an opinion, and chances are good that you can find some feedback for the performance of particular graphics cards with your favorite game.

Five Ways To Improve Computer Performance


PC performance deteriorates over time, even on an excellent computer with high-end system components.
To prevent system slowdown, you need to follow a routine that helps you scan and repair all computer-related problems.
You can adopt multiple methods to optimize the performance of your system. Listed here are the top 5 ways that you can enhance the performance of your PC:
  • Regular Registry Cleaning
  • Cleaning And Defragging Hard Drive
  • Regular Malware Scanning
  • Managing Programs that Load during System Start-up
  • Perform Memory Tests
Regular Registry Cleaning
Registry cleaning is perhaps the most important activity for enhancing the performance of a slow computer, because it removes unwanted information from the registry. All Windows operating systems (OS), such as Windows XP and Windows Vista, store all hardware and software configuration information in the registry, which is a centralized hierarchical database. The registry also stores information related to system settings and user preferences.
Over time, the registry keeps expanding and may eventually become damaged and fragmented. This increases data access time and generates various Windows errors. You may face unexpected system breakdowns if the problem intensifies.
However, it is possible to prevent this situation by regularly scanning and removing redundant, outdated, and invalid information from the registry. You can perform these tasks by using a dependable, professional and user-friendly registry cleaner utility that meets your specific requirements.
Cleaning And Defragging Hard Drive
As unwanted cookies and programs keep accumulating on the hard disk of your computer, it often becomes cluttered. If you want to fix slow computer problems, you must use the Disk Cleanup utility provided in the Windows XP and Vista ‘System Tools.’ This will enable you to remove unnecessary junk and free up space on your hard disk. Additionally, you should use the Disk Defragmenter tool to defragment your system’s hard disk, making your data files contiguous and your Windows programs more quickly accessible. These tools help your PC optimization efforts to a great extent.
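If you prefer to script these maintenance steps rather than click through the menus, both tools can be launched from the command line. A small Python sketch (the cleanmgr and defrag executables ship with Windows XP and Vista; defragmenting usually requires administrator rights):

    import subprocess

    # Launch the Disk Cleanup wizard for drive C:
    subprocess.run(["cleanmgr", "/d", "C:"])

    # Defragment drive C: with the command-line front end to Disk
    # Defragmenter; pass "-a" instead to only analyse the drive.
    subprocess.run(["defrag", "C:"])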
Regular Malware Scanning
Many malicious programs, such as adware, spyware, Trojans, viruses, and worms may drastically affect the speed of your computer. They keep adding malicious files and registry entries, which may cause multiple system errors that can damage your computer and data files. Therefore, you must identify and delete these files by using a reliable anti-virus and anti-spyware tool.
Managing Programs that Load at System Start-up
Many programs are configured to load automatically when you start your computer. These start-up programs continue running in the background and slow down your system. If you want to avoid system slowdown, you must prevent unneeded programs from loading at system start-up. This can be done by using the System Configuration utility (msconfig). You can also use an advanced registry cleaning tool to manage start-up programs, or inspect the registry yourself, as in the sketch below.
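Below is a Python sketch that lists the programs registered under the registry's Run key, one of several places where start-up entries live (the winreg module is part of the standard library on Windows):

    import winreg

    # One of several registry locations where start-up programs are listed.
    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, command, _ = winreg.EnumValue(key, index)
                print(name, "->", command)
                index += 1
            except OSError:   # no more values under this key
                break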


Performing Memory Tests
System slowdowns may also occur due to malfunctioning RAM or memory chips. To identify the cause, you can run a memory test: several memory-testing tools are available on the Internet, and you can download one from a reliable source. Use the tool to run diagnostic tests on the memory modules in your system, detect any malfunctioning module, and then replace it.
Find more information about the Windows registry and improving your computer performance at Instant-Registry-Fixes.org.
Good luck!

IP Address Versions

IP version 6 addresses
The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the Internet's addressing capability. The permanent solution was deemed to be a redesign of the Internet Protocol itself. This next generation of the Internet Protocol, aimed to replace IPv4 on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995.[3][4] The address size was increased from 32 to 128 bits, or 16 octets, which, even with a generous assignment of network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address space provides the potential for a maximum of 2^128, or about 3.403 × 10^38, unique addresses.

The new design is not based on the goal of providing a sufficient quantity of addresses alone, but rather on allowing efficient aggregation of subnet routing prefixes at routing nodes. As a result, routing table sizes are smaller, and the smallest possible individual allocation is a subnet for 2^64 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization rates will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment (that is, the local administration of the segment's available space) from the addressing prefix used to route external traffic for a network. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or renumbering.
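A quick sanity check of these numbers in plain Python arithmetic:

    ipv4_total = 2 ** 32           # entire IPv4 address space
    ipv6_subnet = 2 ** 64          # smallest standard IPv6 allocation
    ipv6_total = 2 ** 128          # entire IPv6 address space

    assert ipv6_subnet == ipv4_total ** 2   # the "square" claim above
    print(f"{ipv6_total:.3e}")              # about 3.403e+38 addresses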

The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is no need for the complex address conservation methods used in classless inter-domain routing (CIDR).

All modern desktop and enterprise server operating systems include native support for the IPv6 protocol, but it is not yet widely deployed in other devices, such as home networking routers, voice over Internet Protocol (VoIP) and multimedia equipment, and network peripherals.

Example of an IPv6 address:

2001:0db8:85a3:08d3:1319:8a2e:0370:7334




IPv6 private addresses

Just as IPv4 reserves addresses for private or internal networks, there are blocks of addresses set aside in IPv6 for private addresses. In IPv6, these are referred to as unique local addresses (ULA). RFC 4193 sets aside the routing prefix fc00::/7 for this block, which is divided into two /8 blocks with different implied policies (cf. IPv6). The addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if sites merge or packets are misrouted.

Early designs (RFC 3513) used a different block for this purpose (fec0::), dubbed site-local addresses. However, the definition of what constituted a site remained unclear, and the poorly defined addressing policy created ambiguities for routing. The address range specification was abandoned and should no longer be used in new systems.

Addresses starting with fe80:, called link-local addresses, are assigned only within the local link area. They are usually generated automatically by the operating system's IP layer for each network interface. This provides instant automatic network connectivity for any IPv6 host, and means that if several hosts connect to a common hub or switch, they have an instant communication path via their link-local IPv6 addresses. This feature is used extensively, and invisibly to most users, in the lower layers of IPv6 network administration (cf. Neighbor Discovery Protocol).

None of the private address prefixes may be routed in the public Internet.
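Python's standard ipaddress module knows about these reserved ranges, so you can check where a given address falls; a short sketch (the example addresses are illustrative):

    import ipaddress

    for text in ("2001:0db8:85a3:08d3:1319:8a2e:0370:7334",  # documentation range
                 "fd12:3456:789a::1",                        # unique local, fc00::/7
                 "fe80::1"):                                 # link-local, fe80::/10
        addr = ipaddress.IPv6Address(text)
        print(text)
        print("  private:", addr.is_private)
        print("  link-local:", addr.is_link_local)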

Hibernate your computer for a faster start-up

Windows XP takes a lot of time to start. If you hate waiting like me, then you should consider using the hibernation option instead of shutting down, for a faster startup.


To do this,

1. Hold down the “Shift” key in the shutdown dialog to change the “Stand By” button to “Hibernate”.

2. Or you can just press H to hibernate instantly.

To enable hibernation, go to Control Panel, open Power Options, click on the Hibernate tab, and check the Enable Hibernation box. To enable hibernation you should have sufficient free space on the C: drive.




You can even use the Power Options control panel to configure your power button to hibernate. To do this, click on the Advanced tab and modify the power button options according to your convenience.





We shall see how to use Microsoft BootVis to reduce the startup time of your computer in the next post. Subscribe to our feed to stay connected.

Enable Multiple Concurrent Remote Desktop Connections in Windows XP SP2

Windows XP Professional and Windows XP Media Center Edition include a Remote Desktop service that allows the computer to be remotely connected to, accessed, and controlled from another computer or host. However, Windows XP only allows one concurrent session per machine.
Whenever a remote user tries to connect to a Windows XP host, the local user is disconnected, with the local console screen locking, with or without his or her permission.
Here’s a hack to remove the single-session limitation and enable multiple concurrent Remote Desktop connections in Windows XP SP2, so that several users can connect to the same computer simultaneously via Remote Desktop. Follow the steps below:
  1. Download ConcurrentRemoteDesk.rar (click the link to download it) and extract the file.
  2. Restart the computer and boot into Safe Mode by pressing F8 during initial boot-up and selecting Safe Mode. (This step is only required if you’re currently running Windows Terminal Services or the Remote Desktop service, so that System File Protection is skipped and bypassed; otherwise it will prompt an error.)
  3. Go to %windir%\System32 and %windir%\System32\dllcache and make a backup copy of termsrv.dll in both folders.
  4. Copy the downloaded termsrv.dll into %windir%\System32, %windir%\ServicePackFiles\i386 (if it exists), and %windir%\System32\dllcache.
  5. Then run the downloaded Multiremotedesk.reg to merge the registry values into the registry, or add them manually:
    [HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Terminal Server\Licensing Core]
    "EnableConcurrentSessions"=dword:00000001

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
    "EnableConcurrentSessions"=dword:00000001
    "AllowMultipleTSSessions"=dword:00000001
  6. Click on Start Menu -> Run command and type gpedit.msc and press Enter to open up the Group Policy Editor.
  7. Navigate to Computer Configuration -> Administrative Templates -> Windows Components -> Terminal Services.
  8. Enable Limit Number of Connections and set the number of connections to 3 (or more). This setting allows more than one user to use the computer concurrently.
  9. Ensure that Remote Desktop is enabled in System Properties -> Remote tab by selecting the radio button for Allow users to connect remotely to this computer.
  10. Enable and turn on Fast User Switching in Control Panel -> User Accounts -> Change the way users log on or off.
  11. Restart the computer normally.
To uninstall, revert to the original termsrv.dll. You will probably have to do this in Safe Mode if Terminal Services is enabled and running.

Enable Auto Reboot After a Crash in Windows XP

        Many people might have faced cases where a system fault/error/crash ends up freezing the OS at the dreaded BSOD (Blue Screen Of Death), which displays the cause of the crash and gives some details about the state of the system when it crashed. The major annoyance is that it requires a “cold” reboot (reset) or a complete power shut-down, reminding you why those two buttons on the front of your PC case are there. Moreover, if you are a system administrator who needs your server(s) to run non-stop 24/7, this can be a real headache. But have no fear, the fix is here. This registry hack is valid for ALL NT, 2000, XP and 2003 releases. To bypass the BSOD altogether and enable the instant “Auto Reboot” feature, run Regedit and go to:


HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl. Right-click on the “AutoReboot” DWORD [REG_DWORD] value in the right-hand pane -> select Modify -> change it to 1 (Auto Reboot enabled) -> click OK -> close the Registry Editor.


Restart Windows for the change to take effect. From now on the OS will reboot upon locking up, right after writing to the crash log file (if enabled). To disable it, change the “AutoReboot” value back to 0 (default).
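The same change can be scripted. Here is a hedged Python sketch using the standard winreg module (run it as an administrator, and as always, edit the registry at your own risk):

    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Control\CrashControl"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 1 = reboot automatically after a crash, 0 = stay at the BSOD
        winreg.SetValueEx(key, "AutoReboot", 0, winreg.REG_DWORD, 1)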

Operating System

      An operating system (OS) is an interface between hardware and user, responsible for the management and coordination of activities and the sharing of the resources of the computer; it acts as a host for the computing applications run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage those details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers, and video game consoles), as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players, use an operating system of some type. Some of the oldest models may, however, use an embedded operating system contained on a data storage device.


Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system through some kind of software user interface (SUI), such as typing commands at a command-line interface (CLI) or using a graphical user interface (GUI, commonly pronounced “gooey”). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. (Whether the user interface should be included as part of the operating system is a point of contention.)
Common contemporary operating systems include BSD, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7). While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems, although the Microsoft Windows line of operating systems has almost 90% of the client PC market.

From Wikipedia


History of the Hard Drive

The hard disk drive was invented by some IBM engineers working under Rey Johnson at IBM in San Jose, CA, in about 1952 to 1954. I worked at IBM from 1965 to 1981 and got to meet and work with some of those men - Rey Johnson, John Lynott, Don Cronquist, Bob Schneider and Lou Stevens come to mind right now.
In 1965 (I think that was the year) a number of engineers left IBM (they were known as the "dirty dozen" within IBM) and founded Memorex. Al Shugart, one of them, later left Memorex and founded first Shugart Associates where the 5 1/4" floppy disk drive was a major product, then Seagate Technology, which effectively started today's industry of small hard disk drives.
The early drives almost all had linear actuators, that is, they moved the heads across the disks in a straight line, using a carriage with wheels. It was only later that rotary actuators, where the heads are held at the tips of a comb-like array and they swing back and forth like a gate, became popular. Because the rotary actuator is cheaper, it's now the standard for all hard disk drives, and that's what I'll be talking about.
The first IBM RAMAC disk drive had a couple of dozen disks, each about 2 feet in diameter, and ONE head! The head was moved from disk to disk and back and forth on each disk using a system of cables and pulleys and stepping motors. The added speed of having at least one head for each disk surface, and of using both surfaces of each disk, soon became obvious, and drives began to look pretty "modern" by 1960, although they were vastly larger and more expensive. Whether the heads are moved in a straight line or swung in an arc, something has to provide the force and the control to move them and keep them in the right place.
Stepping motors, hydraulic actuators and voice coil motors have been used to provide the motive force. Stepping motors have a built-in capability to hold in one position. Hydraulic actuators and voice coil motors (VCMs) provide force, but can't hold a position with great accuracy. A rack with detent pawls has been used, but nowadays a servo system is used, with the positioning information recorded on the disks.
So you can have a disk drive with a stepping motor and you don't need a "servo" or you can have a disk drive with "servo data" recorded on the disks. A stepping motor is simpler (at least in concept) and cheaper, but it's slower in seek time because it isn't really very powerful, and it isn't capable of really, really fine precision. A VCM can provide enormous forces, but it needs control, which the servo system provides. However servo feedback systems are complicated and you have to pre-record the positioning data somehow (usually on one or more of the disks), and that takes up space that could be used for "real" data. On the other hand, servo systems can provide incredibly precise positioning.
The style of hard disk drive we use today began to emerge in the early 1980s. I think it was Maxtor, under Frank Gibeau, where the first high-volume 5 1/4" disk drives with a rotary actuator, a VCM and a servo system were produced.
In 1986, Finis Conner left Seagate and founded Conner Peripherals along with John Squires, and they built the first high-volume 3 1/2" disk drives. The first one, 40 MB, was called the "Fat 40". Not only did they popularize the new smaller "form factor", but they were the first to have an "embedded servo" or "sector servo" in volume.
Meanwhile, Quantum Corporation had been building 8" and 5 1/4" disk drives since 1980, and in the mid 1980's they (actually I think it was Joel Harrison and Bill Moon) saw an opportunity with the 3 1/2" form factor and invented the "hard card", a disk drive on an expansion card that you could just plug into your AT. And that's how the IDE interface got started.
By the way, Quantum used a rather odd variant of the servo system for many years, where the servo information was actually very fine lines etched on a piece of glass attached to the actuator, and read with a photocell. It's actually more complicated, but that's a subject for another discussion.
Around 1990, laptops began to appear, and with them came the 2 1/2" form factor. I never worked closely with 2 1/2" drives, and I'm not very well versed in their historical development.
Way back in the old days, when the world was young and all, a single disk surface was reserved for "servo data" on disk drives that had a voice-coil actuator. Drives with stepper motors didn't need that stuff, and could justly claim that they didn't have to "waste" disk surface on "servo data".
Servo data, by the way, is information that's pre-recorded on the disk and specially formatted to make it possible for the drive to know where its heads are positioned. A stepper motor has, if no errors have occurred, the equivalent information built into its mechanical structure.
But during the late 1970s and early 1980s, techniques were developed that allowed the servo data to be written on the same surfaces that hold the regular user data. There were several schemes proposed and actually implemented, but the one that has taken hold is called "sector servo", where some number of regions on each data track are specially reserved for servo information. Because the sectors are physically coherent on each disk surface, they're commonly called "servo spokes". I've seen drives with as few as 64 spokes per revolution and as many as 128.
I believe the first major manufacturer to use sector servo was Conner--sector servo and the 3 1/2" form factor were their keys to success when they started in (I think) 1986. [At the time, I was one of the majority of old disk drive people who thought they were headed for disaster. Later, I worked there. Shows how smart I am!]
While there may still be drives manufactured with a dedicated servo surface, I think the last major manufacturer to use one was Seagate, up until a couple of years ago. The first Barracuda drive had a dedicated servo surface. Later Barracudas, and I think all current Seagates, use sector servo technology.

www.logicsmith.com

Measurements of Data Speed

Today there are generally two ways of describing data transfer speeds: in bits per second, or in bytes per second. As explained below, a byte is made of 8 bits. Network engineers still describe network speeds in bits per second, while your Internet browser would usually measure a file download rate in bytes per second. A lowercase "b" usually means a bit, while an uppercase "B" represents a byte.

Bps 

Known as bits per second, bps was the main way of describing data transfer speeds several decades ago. Bps was also loosely equated with the baud rate; a 600 baud modem was one which could transfer data at around 600 bps.

Kbps 

Kilobits per second, or 1,000 bits per second.

Mbps

1,000,000 bits per second (usually used in describing internet download/upload speeds).

Gbps

1,000,000 kilobits per second or 1,000,000,000 bits per second. This term is most commonly heard in local area networks, where the close proximity of machines allows for lightning fast data transfer rates.
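The practical consequence of the bits-versus-bytes distinction is easy to get wrong. This small Python sketch (assuming decimal units, so 1 MB = 8,000,000 bits) estimates how long a download takes on a link whose speed is quoted in megabits per second:

    def download_seconds(file_size_mb, link_speed_mbps):
        # Seconds to move file_size_mb megabytes over a link of
        # link_speed_mbps megabits per second (decimal units).
        bits_to_move = file_size_mb * 8_000_000
        return bits_to_move / (link_speed_mbps * 1_000_000)

    # A 500 MB download on an 8 Mbps connection:
    print(download_seconds(500, 8))   # -> 500.0 seconds, not 62.5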

Names for different sizes of data

When choosing a new computer we come across terms such as "300GB hard drive" and "500MB download", and to the uninitiated, this can be somewhat disconcerting. Data in a computer is represented in a series of bits. Since the birth of computers, bits have been the language that control the processes that take place inside that mysterious black box called your computer. In this article, we look at the very language that your computer uses to do its work.

Bit 

A bit is simply a 1 or a 0; a true or a false. It is the most basic unit of data in a computer. It's like the dots and dashes in Morse code for a computer. Patterns of bits make up machine language, the computer's native language.

Byte 

In computer science a byte is a unit of information storage that equals 8 bits and can be used to represent letters and numbers. For example, the binary value 01000001 is 8 bits long and represents the letter A in ASCII.
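You can verify this in Python, where chr() maps a numeric code to its character:

    bits = "01000001"          # one byte, written out bit by bit
    value = int(bits, 2)       # -> 65
    print(value, chr(value))   # -> 65 A   ('A' is code 65 in ASCII)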

kB 

A kB (kilobyte) is a unit of data that equals 1024 bytes. The odd number comes from binary: computers count in powers of 2, and 2^10 = 1024 is the power of 2 closest to 1000.

MB 

A megabyte is 1024 kB, or 1024^2 bytes (1,048,576 bytes).

GB

A gigabyte is a unit of data storage meaning either exactly 1 billion bytes (10^9) or approximately 1.07 billion bytes (1024^3). More often than not in advertising, gigabytes are presented as 1 billion bytes and not 1024^3 (read the fine print in your adverts!). This explains why a freshly formatted 500 GB hard drive shows up as a roughly 465 GB one instead. Not too long ago many people were discussing storage in megabytes. These days, storage has become so cheap that having gigabytes is considered the norm.

TB 

A terabyte is 1024^4 bytes and is defined as about one trillion bytes, or 1024 gigabytes. Data centres such as those operated by Google handle thousands if not millions of terabytes of data each day. As storage becomes cheaper and faster, terabytes are becoming a commonly heard term.

PB 

A petabyte is a unit of information or computer storage equal to about one quadrillion bytes (1024^5). Microsoft, for example, stores a total of approximately 14 petabytes across 900 servers.
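The decimal-versus-binary discrepancy described under GB above is easy to demonstrate. A short Python sketch, assuming the marketing figure uses powers of 10 while the operating system reports powers of 2:

    advertised_bytes = 500 * 10**9    # "500 GB" as printed on the box

    # The same capacity expressed in binary (1024-based) units:
    binary_gb = advertised_bytes / 1024**3
    print(f"{binary_gb:.1f} GB")      # -> 465.7 GB, as reported by the OS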


Software


Software, commonly known as programs, consists of all the electronic instructions that tell the hardware how to perform a task. These instructions come from a software developer in a form that will be accepted by the operating system they are based on. For example, a program that is designed for the Windows operating system will only work on that operating system. Compatibility of software varies as the design of the software and the operating system differ. Software that is designed for Windows XP may experience compatibility issues when running under Windows 2000 or NT.
Software can also be described as a collection of routines, rules and symbolic languages that direct the functioning of the hardware.
Software is capable of performing many different tasks, as opposed to hardware, which can only perform the mechanical tasks it was designed for. Practical computer systems divide software systems into three major classes:
  1. System software: Helps run the computer hardware and computer system. System software includes operating systems, device drivers, diagnostic tools and more.
  2. Programming software: Software that assists a programmer in writing computer programs.
  3. Application software: Allows users to accomplish one or more specific tasks.
The term "software" is sometimes used in a broader context to describe any electronic media content which embodies expressions of ideas such as film, tapes, records, etc. Software is the electronic instruction that tells the computer to do a task.

Hardware

Hardware refers to the parts of the computer itself, including the Central Processing Unit (CPU) and related microchips and micro-circuitry, keyboards, monitors, case and drives (hard, CD, DVD, floppy, optical, tape, etc...). Other extra parts, called peripheral components or devices, include the mouse, printers, modems, scanners, digital cameras and cards (sound, colour, video) etc... Together they are often referred to as a personal computer.
Central Processing Unit - Though the term relates to a specific chip or the processor, a CPU's performance is determined by the rest of the computer's circuitry and chips.
Currently the Pentium chip or processor, made by Intel, is the most common CPU, though there are many other companies that produce processors for personal computers. Examples are the CPUs made by Motorola and AMD.


Chip



With faster processors the clock speed becomes more important. Compared to some of the first computers, which operated at below 30 megahertz (MHz), the Pentium chips began at 75 MHz in the late 1990s. Speeds now exceed 3000+ MHz or 3 gigahertz (GHz), and different chip manufacturers use different measuring standards (check your local computer store for the latest speed). Whether you are able to upgrade to a faster chip depends on the motherboard, the circuit board that the chip is housed in. The motherboard contains the circuitry and connections that allow the various components to communicate with each other.
Though there were many computers using many different processors before it, I call the 80286 processor the advent of home computers, as these were the processors that made computers affordable for the average person. Using a processor before the 286 involved learning a proprietary system and software. Most new software is being developed for the newest and fastest processors, so it can be difficult to use an older computer system.
Keyboard - The keyboard is used to type information into the computer or input information. There are many different keyboard layouts and sizes with the most common for Latin based languages being the QWERTY layout (named for the first 6 keys). The standard keyboard has 101 keys. Notebooks have embedded keys accessible by special keys or by pressing key combinations (CTRL or Command and P for example). Ergonomically designed keyboards are designed to make typing easier. Hand held devices have various and different keyboard configurations and touch screens.
Some of the keys have a special use. These are referred to as command keys. The 3 most common are the Control (CTRL), Alternate (Alt) and Shift keys, though there can be more (the Windows key or the Command key, for example). Each key on a standard keyboard has one or two characters. Press the key to get the lower character and hold Shift to get the upper.
Removable Storage and/or Disk Drives - All disks need a drive to get information off - or read - and put information on the disk - or write. Each drive is designed for a specific type of disk, whether it is a CD, DVD, hard disk or floppy. Often the terms 'disk' and 'drive' are used to describe the same thing, but it helps to understand that the disk is the storage device which contains computer files - or software - and the drive is the mechanism that runs the disk.
Digital flash drives work slightly differently as they use memory cards to store information so there are no moving parts. Digital cameras also use Flash memory cards to store information, in this case photographs. Hand held devices use digital drives and many also use memory cards.
Mouse - Most modern computers today are run using a mouse controlled pointer. Generally if the mouse has two buttons the left one is used to select objects and text and the right one is used to access menus. If the mouse has one button (Mac for instance) it controls all the activity and a mouse with a third button can be used by specific software programs.
One type of mouse has a round ball under the bottom of the mouse that rolls and turns two wheels which control the direction of the pointer on the screen. Another type of mouse uses an optical system to track the movement of the mouse. Laptop computers use touch pads, buttons and other devices to control the pointer. Hand helds use a combination of devices to control the pointer, including touch screens.

>> Note: It is important to clean the mouse periodically, particularly if it becomes sluggish. A ball type mouse has a small circular panel that can be opened, allowing you to remove the ball. Lint can be removed carefully with a tooth pick or tweezers and the ball can be washed with mild detergent. A build up will accumulate on the small wheels in the mouse. Use a small instrument or finger nail to scrape it off taking care not to scratch the wheels. Track balls can be cleaned much like a mouse and touch-pad can be wiped with a clean, damp cloth. An optical mouse can accumulate material from the surface that it is in contact with which can be removed with a finger nail or small instrument.
Monitors - The monitor shows information on the screen when you type. This is called outputting information. When the computer needs more information it will display a message on the screen, usually through a dialog box. Monitors come in many types and sizes. The resolution of the monitor determines the sharpness of the screen. The resolution can be adjusted to control the screen's display.
Most desktop computers use a monitor with a cathode tube or liquid crystal display. Most notebooks use a liquid crystal display monitor.

To get the full benefit of today's software with full colour graphics and animation, computers need a color monitor with a display or graphics card.
Printers - The printer takes the information on your screen and transfers it to paper, or a hard copy. There are many different types of printers with various levels of quality. The three basic types of printer are: dot matrix, inkjet, and laser.
  • Dot matrix printers work like a typewriter transferring ink from a ribbon to paper with a series or 'matrix' of tiny pins.
  • Ink jet printers work like dot matrix printers but fire a stream of ink from a cartridge directly onto the paper.
  • Laser printers use the same technology as a photocopier, using heat to transfer toner onto paper.
Modem - A modem is used to translate information transferred through telephone lines, cable or line-of-sight wireless.
The term stands for modulate and demodulate, which changes the signal from digital, which computers use, to analog, which telephones use, and then back again. Digital modems transfer digital information directly without changing it to analog.
Modems are measured by the speed at which information is transferred, called the baud rate. Originally modems worked at speeds below 2400 baud, but today analog speeds of 56,000 are standard. Cable, wireless or digital subscriber lines can transfer information much faster, with rates of 300,000 baud and up.
Modems also use Error Correction which corrects for transmission errors by constantly checking whether the information was received properly or not and Compression which allows for faster data transfer rates. Information is transferred in packets. Each packet is checked for errors and is re-sent if there is an error.
Anyone who has used the Internet has noticed that at times the information travels at different speeds. Depending on the amount of information that is being transferred, the information will arrive at its destination at different times. The amount of information that can travel through a line is limited. This limit is called bandwidth.
There are many more variables involved in communication technology using computers, much of which is covered in the section on the Internet.
Scanners - Scanners allow you to transfer pictures and photographs to your computer. A scanner 'scans' the image from top to bottom, one line at a time, and transfers it to the computer as a series of bits or a bitmap. You can then take that image and use it in a paint program, send it out as a fax or print it. With optional Optical Character Recognition (OCR) software you can convert printed documents such as newspaper articles to text that can be used in your word processor. Most scanners use TWAIN software that makes the scanner accessible to other software applications.
Digital cameras allow you to take digital photographs. The images are stored on a memory chip or disk that can be transferred to your computer. Some cameras can also capture sound and video.
Case - The case houses the microchips and circuitry that run the computer. Desktop models usually sit under the monitor and tower models beside it. They come in many sizes, including desktop, mini, midi, and full tower. There is usually room inside to expand or add components at a later time. By removing the cover of the case you may find plate-covered, empty slots that allow you to add cards. There are various types of slots and connectors, including IDE, AGP, USB, PCI and FireWire.
Depending on the type, notebook computers may have room to expand. Most notebooks also have connections or ports that allow expansion or connection to exterior, peripheral devices such as a monitor, portable hard drives or other devices.
Cards - Cards are components added to computers to increase their capability. When adding a peripheral device make sure that your computer has a slot of the type needed by the device. 
Sound cards allow computers to produce sound like music and voice. The older sound cards were 8 bit, then 16 bit, then 32 bit. Though the human ear can't distinguish the fine difference between sounds produced by the more powerful sound cards, they allow for more complex music and music production.
Colour cards allow computers to produce colour (with a colour monitor of course). The first colour cards were 2 bit, which produced 4 colours [CGA]. It was amazing what could be done with those 4 colours. Next came 4 bit, allowing for 16 colours [EGA and VGA]. Then came 16 bit, allowing for 65,536 colours, and then 24 bit, which allows for almost 17 million colours. Now 32 bit and higher are standard (a quick check of these figures appears after this list of cards).
Video cards allow computers to display video and animation. Some video cards allow computers to display television as well as capture frames from video. A video card with a digital video camera allows computers users to produce live video. A high speed connection is required for effective video transmission. 
Network cards allow computers to connect together to communicate with each other. Network cards have connections for cable, thin wire or wireless networks. For more information see the section on Networks.
Cables connect internal components to the motherboard, which is a board with a series of electronic pathways and connections allowing the CPU to communicate with the other components of the computer.
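As mentioned under colour cards above, the number of displayable colours is simply 2 raised to the bit depth; a one-line check in Python:

    for bits in (2, 4, 16, 24):
        print(bits, "bit ->", 2 ** bits, "colours")
    # 2 bit -> 4, 4 bit -> 16, 16 bit -> 65536, 24 bit -> 16777216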
Memory - Memory can be very confusing but is usually one of the easiest pieces of hardware to add to your computer. It is common to confuse chip memory with disk storage. An example of the difference between memory and storage would be the difference between a table where the actual work is done (memory) and a filing cabinet where the finished product is stored (disk). To add a bit more confusion, the computer's hard disk can be used as temporary memory when the program needs more than the chips can provide.
Random Access Memory or RAM is the memory that the computer uses to temporarily store the information as it is being processed. The more information being processed the more RAM the computer needs.
One of the first home computers, the Commodore 64, used 64 kilobytes of RAM. Today's modern computers need a minimum of 64 MB (recommended 128 MB or more) to run Windows or Mac OS X with modern software.
RAM memory chips come in many different sizes and speeds and can usually be expanded. Older computers came with 512 KB of memory, which could be expanded to a maximum of 640 KB. In most modern computers the memory can be expanded by adding or replacing the memory chips, depending on the processor you have and the type of memory your computer uses. Memory modules range in size from 1 MB to 4 GB. As computer technology changes, the type of memory changes as well, making old memory chips obsolete. Check your computer manual to find out what kind of memory your computer uses before purchasing new memory chips.
http://www.grassrootsdesign.com/intro/hardware.php

Computer types

Supercomputer

Supercomputers are fast because they're really many computers working together.

Supercomputers were introduced in the 1960s as the world's most advanced computers. These computers were used for intense calculations such as weather forecasting and quantum physics. Today, supercomputers are one of a kind, fast, and very advanced. The term supercomputer is always evolving: today's supercomputer is tomorrow's normal computer. As of November 2008, the fastest supercomputer is the IBM Roadrunner. It has a theoretical processing peak of 1.71 petaflops and has currently peaked at 1.456 petaflops.





Mainframe

Mainframes are computers where all the processing is done centrally, and the user terminals are called "dumb terminals" since they only input and output (and do not process).
Mainframes are computers used mainly by large organizations for critical applications, typically bulk data processing such as a census. Examples: banks, airlines, insurance companies, and colleges.





Workstation


Workstations are high-end, expensive computers that are made for more complex procedures and are intended for one user at a time. Some of these complex procedures include science, math and engineering calculations, and workstations are useful for computer-aided design and manufacturing. Workstations are sometimes improperly named for marketing reasons. Real workstations are not usually sold at retail.
The movie Toy Story was made on a set of Sun (Sparc) workstations.
Perhaps the first computer that might qualify as a "workstation" was the IBM 1620.


The Personal Computer or PC


PC is an abbreviation for Personal Computer; it is also known as a microcomputer. Its physical characteristics and low cost are appealing and useful for its users. The capabilities of a personal computer have changed greatly since the introduction of electronic computers. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single individual. The introduction of the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to the proliferation of personal computers after about 1975. Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. By the late 1970s, mass-market pre-assembled computers allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware. Throughout the 1970s and 1980s, home computers were developed for household use, offering some personal productivity, programming and games, while somewhat larger and more expensive systems (although still low-cost compared with minicomputers and mainframes) were aimed at office and small business use.
Today a personal computer is an all-round device that can be used as a productivity tool, a media server and a gaming machine. The modular construction of the personal computer allows components to be easily swapped out when broken or upgraded.



Microcontroller


Microcontrollers are miniature computers that enable the user to store data and execute simple commands and tasks, with little or no user interaction with the processor. These single-circuit devices have minimal memory and program length but can be integrated with other processors for more complex functionality. Many such systems are known as embedded systems. Examples of embedded systems include smartphones and car safety systems.
Microcontrollers are important; they are used every day in devices such as appliances and automobiles.



Server


Servers are similar to mainframes in that they serve many users, with the main difference that the users (called clients) usually do their own processing. The server's processes are devoted to sharing files and managing log-on rights.
A server is a central computer that contains collections of data and programs. Also called a network server, this system allows all connected users to share and store electronic data and applications. Two important types of servers are file servers and application servers.


From wikiversity.org













History Of Computers

Computers were initially large machines that could fill entire rooms. Some were operated using large vacuum tubes, the predecessors of today's transistors. To operate such machines, punch cards were used. One of the first examples of punch-card operation was the Jacquard Loom.



(Image: Jacquard Loom)
In the 1820s Charles Babbage designed his difference engine, an early mechanical calculator.


Together with the punch card design, he went on to create the analytical engine. Regrettably, the engine never saw completion due to political and funding issues.

Over time computers became more and more powerful, with the introduction of the ubiquitous microprocessor driving development forward. Gordon Moore, one of the co-founders of Intel, formulated Moore's law, which predicted that the number of transistors that could be placed on an integrated circuit inexpensively would double every two years. This law has held true to a certain degree, and it can be seen in motion every day with the introduction of more and more powerful microprocessors and larger hard drives and memory modules.


A Computer-based system

A computer-based system is a system in which a computer is involved; it consists of three major elements: hardware, software, and user. The elements of a computer-based system are illustrated in the three following scenarios:
1. Registration in a University

Hardware = Microcomputers, a network platform, and a server computer
Software = Student Registration Application, Database, and Operating System
User = Operators, Administrators

2. Controlling a section of an Assembly Line

Hardware = A special embedded system developed for this purpose
Software = The machine code loaded in the embedded system's memory
User = Other machines, a supervisor

3. Playing a game with a Computer

Hardware = Game Console such as XBox, Playstation
Software = The Game itself
User = The little kid

Prepared by Seid ICT Sales & Service Somaliland


History of the Java Programming Language

James Gosling initiated the Java language project in June 1991 for use in one of his many set-top box projects. The language, initially called Oak after an oak tree that stood outside Gosling's office, also went by the name Green, and was later renamed Java, from a list of random words. Gosling aimed to implement a virtual machine and a language that had a familiar C/C++ style of notation.

Sun released the first public implementation as Java 1.0 in 1995. It promised "Write Once, Run Anywhere" (WORA), providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular. With the advent of Java 2 (released initially as J2SE 1.2 in December 1998), new versions had multiple configurations built for different types of platforms. For example, J2EE targeted enterprise applications, and the greatly stripped-down J2ME targeted mobile applications. J2SE designated the Standard Edition. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively.

In 1997, Sun Microsystems approached the ISO/IEC JTC1 standards body and later the Ecma International to formalize Java, but it soon withdrew from the process. Java remains a de facto standard, controlled through the Java Community Process. At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System. Sun distinguishes between its Software Development Kit (SDK) and Runtime Environment (JRE) (a subset of the SDK); the primary distinction involves the JRE's lack of the compiler, utility programs, and header files.

On 13 November 2006, Sun released much of Java as free and open source software under the terms of the GNU General Public License (GPL). On 8 May 2007 Sun finished the process, making all of Java's core code available under free software / open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright.