Friday, July 25, 2008

Evolution of Moore's Law

Moore's Law has become one of the most reliable constants in an increasingly technological world. According to Forbes.com, there are companies that gamble their entire futures on Moore's Law, simply because it promises better performance per dollar, year after year, at least in theory. Originally, Moore's Law stated that the number of electronic components that fit on a silicon chip would double every 12 months. But it has been revised several times over its 40-year existence, so that it is now associated primarily with the number of transistors that fit on a chip, and the timeframe has stretched to 24 months. Moore himself said that his Law can't continue forever, and that eventually the exponential growth rate will end in disaster.
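To make the arithmetic concrete, here is a tiny illustration (my own, not from the Forbes piece) of what doubling every 12 versus every 24 months implies. The starting figure of 2,300 transistors is roughly the 1971 Intel 4004 and is used purely for scale.

```python
# A quick sketch of the doubling arithmetic behind Moore's Law.
# The starting count (~Intel 4004) and 40-year horizon are illustrative only.

def transistor_count(initial, years, doubling_period_years):
    """Transistor count after `years`, doubling every `doubling_period_years`."""
    return initial * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    start = 2_300                      # roughly the Intel 4004 of 1971
    for period in (1, 2):              # 12-month vs 24-month doubling
        after_40 = transistor_count(start, 40, period)
        print(f"doubling every {period} yr(s): {after_40:,.0f} transistors after 40 years")
```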

In an interview with Forbes.com, Bernard Meyerson, a chief technologist at IBM, said that people have been misinterpreting Moore's Law for years, and will continue to do so well into the future. According to Meyerson, the core tenet of Moore's Law is that you can double the amount of stuff on a chip in a certain amount of time, which has commonly been interpreted as doubling the amount of stuff in a unit of area. He believes this view is short-sighted, as it doesn't account for vertical integration and chip stacks: the chip's footprint stays the same, yet the amount of circuitry on it can still double.

In fact, Meyerson envisages a world far more complex than vertical stacks. When the Forbes interviewer asked where he saw the industry in five years' time, he painted a picture of planes of super-high-density memory above planes of logic, of multiple cores on a single level, and of reconfigurable wiring between chips stacked on one another. In other words, he saw the future in 3-D, which would add density while reducing costs, all while optimising the chips' performance at a fraction of the energy currently required.

He also predicted major advances in integrated optics and the use of optical signals in place of electrical ones. Optical signals travel significantly faster than electrical ones and are not hindered by factors such as resistive-capacitive (RC) delay. Light can also carry data very quickly while using much less power. Transmission by light is far simpler and less prone to distortion than electrical signalling, which makes it ideal for data centres, where data needs to be moved over large distances relatively quickly.

Meyerson cautioned, however, that none of these possibilities makes the development and manufacture of chips any easier. He said that integrated optics and 3-D stacking are as technically challenging as any systems used today. He also added that stacking chips vertically is merely a convenient "fix" that gets the job done, but is not a permanent solution to the problems inherent in the evolution of microchips and computing. Each new "fix" is more difficult to come up with than the one before it, as problems become increasingly complex and run up against the laws of physics. It's only a matter of time before "fixes" cease to work and we have to turn to technology that's completely new and different. Meyerson said that if we haven't started investigating these new technologies by now, it may already be too late.

Recommended sites:

http://www.forbes.com/technology/2008/06/09/ibm-moores-law-tech-cionetwork-cx_es_0609ibm.html?feed=rss_technology

http://www.techworld.com/opsys/news/index.cfm?newsid=3477

Sandra wrote this article for Star Business Internet, an internet service provider and website hosting company, one of the leading Internet service companies specialising in business website hosting in the UK.

Article Source: http://EzineArticles.com/?expert=Sandy_Cosser

The Invention of Ethernet - Robert Metcalfe, David Reeves Boggs and PARC

Without Ethernet, modern commerce couldn't function. Our modern economy is heavily reliant on the fast, smooth transfer of data. I find it odd that the inventors of the networking system that allows this to occur are known only to computer geeks. Robert Metcalfe and David Boggs' work on Ethernet makes them as important to the 21st century as the likes of Gutenberg were to the 15th century.

Ethernet is the system that allows computers to communicate with each other and with devices such as printers and scanners. The computers are connected in a Local Area Network (LAN) using Ethernet cables and hubs. The network is closed to those who are not connected to the LAN. Ethernet differs from the internet, which is an open network that uses telephone and broadband lines for the transfer of data.

Metcalfe and Boggs worked together at PARC (Palo Alto Research Center), the research centre for Xerox. Metcalfe came from an engineering and management background, Boggs from an electrical engineering and radio background. Whilst working at PARC, Metcalfe was charged with creating a system that would allow all the users to print to the new laser printer that Xerox had developed.

The idea of Ethernet was first aired in a memo that Metcalfe sent out to Xerox staff in May 1973. The system was up and functioning by November 1973. It can be claimed that the two of them co-invented Ethernet: Metcalfe came up with the idea and the blueprint, while Boggs worked out how to build the system.

The pair went on to build a number of Ethernet interfaces for Xerox and later published "Ethernet: Distributed Packet Switching for Local Computer Networks" in Communications of the ACM. This was the seminal paper on Ethernet.

Boggs later left PARC and formed LAN Media Corporation; Metcalfe left in 1979 and founded 3Com.

Tony Heywood ©

Article Source: http://EzineArticles.com/?expert=Antony_Heywood

Smart Cards and Their Wide Applicability

Of late, digital cards have become a growing industry. They have immense uses in the present day and are employed extensively in almost every domain of life. These electronic cards have gained popularity due to their wide applicability, making their presence felt in almost every field, including payphones, banking and retail, communication, security control and many more. With the growth of the Internet and e-commerce, smart cards have gained momentum and now have a plethora of uses.

In this day and age, it has become nearly impossible to go out, shop or even eat without a smart card. Digital cards have indeed become an integral part of our lives since they are easy to carry and use. These portable cards have made life much easier than it used to be. They serve as reliable identification cards and are safe to use. In this article, we will throw light on the prime uses of smart cards, which are mentioned below:

Payphones

There are several countries where payphones are equipped with card readers. The benefit to the phone companies is that they don't have to collect coins and they receive payment promptly and conveniently. People like to use electronic phone cards at payphones because they don't have to check their wallet each time to see whether they have enough change.

Banking and Retailing

Smart banking cards have multiple uses. They can be used as debit cards, credit cards or stored-value cards. Smart bank cards also serve as good proof of personal identification. An intelligent microchip on the smart card, working with one on the card reader, secures the interests of users, merchants and banks alike. Of late, these cards have also given impetus to loyalty programs.

Security Control

Every organization, whether a school or a private firm, needs some form of security control. Though there are several kinds of security control measures, smart ID cards are considered better than the rest, as they can operate offline as well. A smart ID card authenticates a person's identity and is extensively used in schools and offices nowadays. It can also grant individuals, such as students, access to restricted areas: they can enter computer rooms simply by presenting their smart ID cards to a card reader.

Mobile Phone Communication

Smart cards have great use in the field of mobile phone communication. For GSM digital mobile phones, these cards make a wonderful identification device. They store all the information needed to bill the user, and they enable the user to make calls from any phone terminal.

Health Care Services

Smart health cards give access to a patient's case history. Information is properly stored and can be accessed at any point in time. Health care professionals access patients' information, which is stored on their smart cards, and update the same in their official records. Smart medical cards also pave the way for instant insurance processing. In fact, nowadays doctors and nurses also carry smart identification cards that enable multi-level access to information.

E-commerce

Digital cards are widely used for performing a host of electronic commercial transactions. You can buy articles of your choice over the Internet. You can book tickets via the Internet by means of a smart card. You can order flowers, birthday gifts and much more by using your smart card as a credit card or debit card. Smart cards are also of great use in service industries, as you can easily make payments online for services provided by the other party.

To conclude, the smart card has become a buzzword in present times, and almost every person carries at least one type.

Check out ID Superstore for low prices on identification card printers. They also carry many other quality identification supplies, such as card printer supplies and badge printer ribbons.

Article Source: http://EzineArticles.com/?expert=Jack_Mathew

Saturday, July 12, 2008

Multitouch - The Technology of 2008

When was the last time you were amazed by a touch screen or touchpad that recognizes multiple simultaneous touch points? It's no magic, but an attribute of multitouch, which uses software to interpret simultaneous touches. More elaborately, multi-touch is a human-computer interaction technique implemented by hardware that can register the position and, frequently, the pressure of each touch point independently.

The journey of multitouch since its beginnings in 1982, which started with multi-touch tablets and multi-touch screens, has been extremely successful and fulfilling in terms of the technological revolution it has brought about.

Making a humble yet firm commercial start in 2005 with the Lemur Input Device, the first commercially successful professional multimedia controller with a multi-touch display, multitouch has indeed come a long way since then.

With more innovation on the anvil, multitouch technology has been widely accepted, and its implementation in devices featuring more varied finger-gesture options, such as Microsoft's Surface technology, Apple's iPod touch and the MacBook Air, says it all. Apple is planning its integration in future versions of the MacBook and MacBook Pro notebooks too.

Technology experts believe that the year 2008 might be a turning point for multitouch. Just as Apple completely transformed our vision of multitouch technology with its breakthrough implementation in the iPhone, there are indications that the MacBook Pro is also set to get a multitouch trackpad very soon.

Apple's latest report on multitouch indicates the integration of the same multi-touch trackpad as the one on the MacBook Air in the next version of the MacBook Pro. Reports also mention the viability of the new MacBook Pro in comparison to its predecessors: the new portables will be based on Intel's latest Penryn processors, which, besides enhancing performance, will also increase battery life.

Technology reports also point to N-trig's multitouch capabilities. N-trig, the provider of DuoSense™ technology, is reported to have demonstrated its multitouch capabilities and hopes to have them available for OEM integration in DuoSense™ in May 2008.

Now, what is it that makes people 'wow' at multitouch? Most probably it is the simplicity of the technology and the new dimension of touch usability it offers users in interacting with the computer that has added to its popularity. N-trig, for instance, describes its multitouch as capacitive rather than resistive touch, thereby enabling the best touch experience in any input device for the personal computing industry.

The author is a webmaster/writer for a few popular technology and finance blogs. HomePage: http://www.surfacerama.com

Article Source: http://EzineArticles.com/?expert=Raja_Chandran

Mozilla Firefox Tips & Tricks to Speed Things Up

* Search Sites With Keywords: Go to any site with a search field, right-click the search box and select 'Add a Keyword for this Search'. The Add Bookmark box will open. Give it a name and a short keyword - 'bay' for eBay, for example. If you want to search for, say, CDs on eBay, you can now do so by typing 'bay CDs' into the Firefox address bar.
* Assign Keywords to Bookmarks: To speed up locating a bookmark, go to Bookmarks > Manage Bookmarks, right-click the one you want and select Properties. Enter a short text string in the Keyword field ('tips' for the Yoebo.com internet forum, for example) and click OK. To access the site, simply type the string ('tips') into the address bar and hit enter.
* Navigate Tabs: Press ctrl+tab to jump from tab to tab (left to right), or ctrl+shift+tab (right to left). When you get to the last tab it will jump back to the first one. Alternatively, you can press ctrl and a number that corresponds with the tab you want-so ctrl+3 will jump to the third tab.
* Type URLs Quicker: Type the name (not the address) of a site you want to visit in the address bar and press ctrl+enter. This will add http://www. before the text and .com after it and automatically load the site. Shift+enter adds http://www. and .net, and ctrl+shift+enter adds http://www. and .org. If you hold down alt at the same time, the site will load in a new tab.
* Quick Word Search: If you want to search for a word or phrase you found on a website, select the text then drag and drop it into the Firefox search bar. Alternatively, you can highlight the phrase, right-click it, and select 'Search Web for'. If you set the search bar to Amazon or eBay, you can use this method to quickly look up products. Add UK-specific sites to the search bar at mycroft.mozdev.org.
* Drop-Down Bookmarks: You can bookmark multiple open tabs in a single folder by going to Bookmarks > Bookmark All Tabs. If you save this folder to the bookmarks toolbar, clicking on it will display all the bookmarks in a drop-down list. You can do the same with a folder created from existing bookmarks.
* Delete Addresses: Click on the down arrow in the address bar and you will see a list of recently accessed sites. To remove a particular site from the list, highlight it and press shift+delete.
* Get Instant Downloads: Right-click the navigation toolbar (above the address bar) and select customize. This will bring up a box containing icons. Drag and drop the downloads icon to the toolbar. Now whenever you want to download something simply drag the link to the button to begin.

This is just ONE of the amazing guru marketing tips Alfred Peters has provided for the Yoebo Internet Marketing Forum members.

You can INSTANTLY check out ALL of the secrets for FREE at The Home Business Forum

Article Source: http://EzineArticles.com/?expert=Alfred_Peters

Are We Getting Dumber As Technology Gets Smarter?

Recently, a leading UK business website revealed that they now have to spend over 25% of their online search marketing budget to cater for misspelt words, hinting towards a worrying trend in today's society and suggesting we may have become too reliant upon technology, while our language skills suffer as a result.

With over 50% of English school leavers failing to grasp even the most basic levels of spelling and grammar, there is an argument for closer monitoring of the country's education system and of how technology is impacting the country's young minds.

Perhaps the finest example of how the English language is being "dumbed down" can be seen through the ever popular SMS messaging service where there is a tendency to use abbreviated or phonetically spelt words to increase the speed of communication. This development is also evident in instant messaging conversations, with users of MSN or Yahoo! Messenger shortening their words using "expressions" and emoticons to communicate their message. With over 70% of Europe's online population using instant messaging (IM), this issue is not going to go away; speed, it seems, is more important in today's society, than the quality of the message itself.

As our children use the internet more and more, rather than libraries, as their source of knowledge, they are recycling information found through search engines and new authority sites such as Wikipedia, the free online encyclopedia that anyone can add content to.

If our young are using these portals as the oracles of truth, then any poor grammar or misspelled words found in such sources, such as the word mannequin, will not help the education of the children in the UK and can even have worldwide ramifications.

Also quite concerning is the growth in misspelt or abbreviated words being used in the naming of children. Recent research from Australia revealed an increase in multiple spellings of names such as Aiden, which was found to be spelt in nine ways, and Amelia and Tahlia in eight ways each. Errors such as these cannot be easily corrected, and parents of this future generation are damning their children to a life with a misspelt name because they wanted an individual-looking name, or just couldn't be bothered to get it right.

Generations before us will have had language issues with their predecessors too; however, the staggering advancement in technology is having a dramatic and potentially wide-scale damaging effect on this generation's ability to communicate. We are creating a situation where the youth of today can communicate amongst themselves, but not with their grandparents.

We could well be facing a communication divide that we may find hard to bring together in the future.

Isla Campbell writes on a number of topics on behalf of a digital marketing agency and a variety of clients. As such, this article is to be considered a professional piece with business interests in mind.

Article Source: http://EzineArticles.com/?expert=Isla_Campbell

Tuesday, July 1, 2008

9 Reasons Why Linux is For the Average User

Well, every Linux user worth his/her salt has a list of why Linux is perfect for [adjective] users, so I sat down and thought up a list for myself! 9 Reasons why Linux is for the average user. The reasons aren't in any order, just the order that I thought of them, so don't think that I place particular importance on the ones at the top, or that the ones at the bottom are a bit useless really. Anyway, read on to see why I think that Linux is perfect for the average user!

Secure

Linux is very secure by design, mostly due to the file permissions system, which prevents you from tampering with any files you don't have write access to. This more or less makes it impossible, as a normal user, to catch a Linux virus that does more than mess with your own files. If someone runs as root all the time, all the security in the world is useless - but not using root for everyday tasks is an important lesson taught by many distros.

The average user just wants to use their computer; they don't want to fiddle around with securing it. Linux's inherent security is therefore a boon to them.
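As a rough illustration of the permission system described above (my own sketch, not from the article), here is how those permission bits look from Python on a typical Linux system; the paths are just common examples.

```python
# Inspect Unix file permission bits - the mechanism credited above for
# Linux's security. Paths are illustrative and may differ per system.
import os
import stat

def describe_permissions(path):
    """Print the owner/group/other permission string for a file."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        print(f"{path}: not present on this system")
        return
    print(f"{path}: {stat.filemode(mode)}")       # e.g. -rw-r--r--
    if not os.access(path, os.W_OK):
        print("  -> current user has no write access, so tampering is refused")

if __name__ == "__main__":
    describe_permissions("/etc/passwd")   # world-readable, root-writable only
    describe_permissions("/etc/shadow")   # not even readable by normal users
```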

Stable

As long as you're not running beta, alpha, or software in any other stage of development other than "final", Linux is very stable. This is partly due to the separation of GUI and Kernel, which is not present in Windows. This means that if the GUI does freeze, the computer doesn't need to be rebooted - the GUI just needs to be killed and started again, with a keyboard shortcut.

The average user doesn't want their computer crashing all the time when they use it; they just want to use it with no hassle. Again, a good reason to use Linux.

Free

Linux, and most of its software, is free as in speech, and free as in beer. The user has the right to edit the code, and do whatever they want to it. This allows patches and bugfixes to be developed incredibly quickly by a myriad of developers using that particular program, and sent to the project maintainers upstream.

The average user doesn't like paying hundreds of pounds for an operating system or software, freedom of price is therefore a good reason to use Linux. The average user also doesn't want buggy software, so the freedom of speech is a good reason to use Linux.

Packages

Most distributions (all?) have some form of package manager, which makes it easy to install, remove, and update software. Coupled with online repositories (such as many distributions have), this enables the user to update the entire system with one command, or even a click if a graphical package manager is present.

The average user doesn't want to wander across the internet looking for updates for all of their software, yet many of them think this is the only way due to what I call "Windows Sheep Mentality". A package manager which can upgrade all of their software, including the OS, for free is therefore a good reason for the average user to switch to Linux.
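For example, on a Debian- or Ubuntu-style system (my assumption; the article doesn't name a distribution), the whole-system update described above boils down to two commands, shown here wrapped in a small Python script. Other distributions expose the same idea through their own tools (dnf, pacman, zypper, and so on).

```python
# A minimal sketch of "update everything with one command" via apt.
import subprocess

def update_whole_system():
    """Refresh the package index, then upgrade every installed package."""
    subprocess.run(["sudo", "apt-get", "update"], check=True)
    subprocess.run(["sudo", "apt-get", "-y", "upgrade"], check=True)

if __name__ == "__main__":
    update_whole_system()
```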

Hardware

Linux actually supports more hardware out of the box than Windows does. Okay, so the supported hardware is often specialist; that is because Linux is very popular on servers and similar machines that need to just work and use the Right Thing. Even so, if installed on any given computer, Linux is likely to work, whereas Windows probably will not - especially if you have some esoteric hardware - without umpteen driver discs.

The average user doesn't want to upgrade their computer just to use the next version of their OS; Linux works with very old hardware as well as very new, which is a reason to use Linux.

Support

Many Linux distributions offer paid support, and almost all have a large and (usually) welcoming community who will go out of their way to help someone who needs it. This community support is usually the same quality professional paid support would be, yet is free. There are also a lot of generic Linux communities online who will help with anything if they know how, and their generic solutions will often help on many distributions.

The average user doesn't want to ring tech support and talk to someone in an overseas call centre, after their call has been routed across the world, only to be told they have to reinstall Windows. Free and in-depth support is therefore good.

Variety

Linux has a lot of variety in ways that Windows and OS X just can't match. One example would be the abundance of window managers and desktop environments. Another would be the base system - no two distributions are the same. This variety, coupled with the fact that there are over 300 distributions, means that there is a distribution for everyone.

The average user wants to use what works best for them; they don't want to be forced into someone else's standard which may not fit them very well. This huge variety is therefore an advantage.

Customisability

Without the customisability of Linux, the variety wouldn't exist. Every aspect of Linux can be edited and customised by the user - from the widget theme, right down to the init scripts controlling the booting of the system. Compare THAT to a Windows user's ability to change the wallpaper, colours, or a couple of icons.

The average user wants their computer to be, well, their computer. They want it to look how they want it to look, and the customisability gives them that.

Compatibility

Linux is compatible with most proprietary file formats, including the incredibly popular Microsoft Office file formats. This means that a Linux user needn't worry about problems sharing files with Windows users, as they can save the file in the Windows format - or even in a format such as PDF, which is the same on all operating systems.

The average user doesn't want to jump through hoops for other people to be able to use their files, and Linux provides an easy way to use the same file formats as everyone else.
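As a hedged sketch of the point above (my own illustration; the article doesn't name a tool), a current LibreOffice installation with the `soffice` binary on the PATH can save a document in a Windows format or as a PDF from a script:

```python
# Convert an office document to another format using LibreOffice's
# headless mode. Filenames here are purely illustrative.
import subprocess

def convert(path, target_format="pdf"):
    """Convert an office document to e.g. 'doc', 'docx' or 'pdf'."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", target_format, path],
        check=True,
    )

if __name__ == "__main__":
    convert("report.odt", "doc")   # save in a Windows (Word) format
    convert("report.odt", "pdf")   # or as a PDF readable everywhere
```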

If you would like more information about Linux in general, please have a look at my blog (http://blog.yarrt.com) or email me (mike@yarrt.com). I will gladly answer any questions you have about any Linux-related topics.

Article Source: http://EzineArticles.com/?expert=Michael_S_Walker

Multifunction and Rugged Mobile Printers

Multifunction Printers

Computer technology is constantly improving on itself, yet one area that appears to receive little attention is printer technology. Many home businesses require not only printing functions but scanning, copying and faxing as well. Frequently, each of these devices is set up as a separate piece of equipment, requiring a significant investment not only of money but of office space as well. One of the newer creations in the field is the multifunction printer, with the ability to print, scan, copy and fax. With a multifunction printer, one piece of equipment completes all these jobs, takes up much less space and costs much less than the combination of all the devices purchased separately.

However, the key caveat with a multifunction printer is that it does all of these functions adequately but none of them to the degree of a stand-alone device. There are several features that need to be taken into account when researching the multifunction printer that best meets one's individual needs.

The first feature that needs to be taken into consideration is the type of printer: multi-function printers can be inkjet, laserjet, or color laserjet. Inkjet printers operate by forcing variably-sized droplets of ink onto a piece of paper or similar medium. A laser printer produces text and graphics on plain paper, using the same technology as digital photocopiers; they employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.

The speed of a laser or inkjet printer is measured in ppm (pages per minute), the number of pages the printer can produce in one minute. Print speeds may vary depending on many factors, such as the complexity of the document, page coverage, and the design of the printer itself. Generally, as the speed increases, the quality of the output decreases. A typical inkjet printer may print at speeds between 1 and 28 pages per minute for black text and 1 to 20 pages per minute for color, photographs or graphics. The speed of a mid-range monochrome laser printer may vary between 6 and 25 pages per minute for sharp black text and 2 to 20 pages per minute for black and white graphics. The print speed of a typical color laser printer will vary between 6 and 20 pages per minute for black text and 1 to 12 pages per minute for color graphics.

Image quality is another feature to examine and depends on the number of dots per inch printed. The standard resolution of 600 x 600 dots per inch is sufficient for most everyday printing but is unsuitable for printing quality photographs or graphics. The more dots printed per inch, the higher the image quality.
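As a quick sanity check of those figures (my own arithmetic, not from the article), the relationship between ppm, job time and dots per page works out like this:

```python
# Illustrative arithmetic behind the ppm and dpi figures quoted above.

def print_time_minutes(pages, ppm):
    """How long a job takes at a given pages-per-minute rating."""
    return pages / ppm

def dots_per_page(dpi, width_in=8.5, height_in=11.0):
    """Total addressable dots on a page at a given resolution."""
    return int(dpi * width_in * dpi * height_in)

if __name__ == "__main__":
    # A 60-page black-text report on a 20 ppm laser printer:
    print(f"{print_time_minutes(60, 20):.0f} minutes")   # 3 minutes
    # Dots on a US Letter page at the standard 600 x 600 dpi:
    print(f"{dots_per_page(600):,} dots")                # 33,660,000 dots
```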

The scanner function is also of great importance in a multifunction printer that can use two types of scanning technologies. A charge-coupled device (CCD) uses a light-sensitive integrated circuit that stores and displays the data for an image in such a way that each picture element, or dot, in the image is converted into an electrical charge. Contact Image Sensors (CIS), used more with flatbed scanners, place the image sensor in near direct contact with the object to be scanned in contrast to using mirrors to bounce light to a stationary sensor, as is the case in conventional CCD scanners. A CIS typically consists of a linear array of detectors, covered by a focusing lens with LEDs that allow the CIS to be highly power efficient, with many scanners being powered through the minimal line voltage supplied via a USB connection. CIS devices typically produce lower image quality compared to CCD devices but make the printer more durable. Another feature of scanning is how the documents are scanned; they can be fed in a sheet at a time, in which case an automatic document feeder that holds at least 50 sheets facilitates use, or they can be scanned over a flat-bed. The flat-bed ones allow scanning of thicker objects.

The faxing function is probably the simplest one. The fax/modem speed should be at least 33.6 Kbps. Other fax features that might be offered are color faxing, fax broadcasting, and/or group dialing, as many multifunction printers do not offer full fax functions.
Finally, given all the functions a multifunction printer is required to carry out, a lack of internal memory is extremely noticeable. While 8 MB of memory may be adequate for home office or small office use, an efficient and effective multifunction printer should have at least 16 MB of memory or more. The greater the amount of memory, the faster certain multifunction printer processes can be carried out.

Rugged Mobile Printers

Rugged mobile printers are built like rugged laptops. Being mobile, they can pretty much fit in your pocket or glove compartment. Rugged mobile printers can be mounted in vehicles next to your rugged notebook, and some mount directly to a rugged computing device. A rugged printer is made to take a beating, and most can withstand drops from nearly 10 feet. The rugged mobile printer is built around the MIL-STD 810F test, confirming its ruggedness and ability to handle the rigors of everyday use. Ranging from 2.5" to 4" receipt printers up to full-sheet 8.5" x 11" printers, a rugged printer will handle any job while on the road, whether printing inventory receipts or law-enforcement citations.

Visit http://www.OCRuggedLaptops.com for more information about the rugged mobile printers.

Article Source: http://EzineArticles.com/?expert=Mack_Harris

Desktop Virtualization

An enormous change may be lurking in the future for desktop computing. The change is known as desktop virtualization. Desktop virtualization is a situation where the physical personal computer desktop is in a separate location from where the end-user accesses it. The personal computer accessed remotely can be at home, in the office or in a data center, while the end-user can be in a different location, such as a hotel room, another city, an office building or an airport. This is in sharp contrast to the current environment, where the end-user directly accesses the personal computer, its operating system, its applications and all associated peripherals in a closed and immediate environment.

Desktop virtualization presents many advantages for information technology departments, as management of the desktop is greatly simplified. One change to a setting in the operating system, for example Windows, can be made once in a central location, and all employees receive the change once they access their virtual desktops. Previously, information technology departments either had to visit each desktop to make such a change or push the change down to each desktop from a central server.

Although desktop virtualization may appear to be a panacea for information technology departments constantly endeavoring to keep current with software patches and other updates, the practical options for large-scale adoption are only just emerging. Consequently, many information technology departments are not moving quickly to adopt this technology; its real viability has not been completely proven for massive-scale adoption. Yet many see in this technology great potential for lowering the operating costs of owning and managing personal computers and desktops, which usually account for a large percentage of information technology budgets.

As with most innovations, there are risks involved. As the personal computer entity basically resides in the data center, the biggest risk is greater reliance on the uptime of the data center: if the data center goes down, personal computer access becomes unavailable. In addition, desktop virtualization requires significant up-front investment in servers, storage, network bandwidth, licenses and thin-client hardware. Software as a service may be a cheaper alternative in terms of cost and implementation, as the application resides on a server and is accessed through a browser over the web.

Two strategies exist for the implementation of desktop virtualization: the "fat image" approach of today and the stateless strategy of the future. With the fat-image approach, the operating system and applications are combined into a single image stored on a data center server and viewed on a simple client computer by means of various remote access protocols. The advantages of this approach are the centralization of storage and increased security for data. In the stateless approach, every time an end-user turns on their computing device, the data center creates a temporary virtual image from a set of master operating system images and icons and delivers those to the computer. In this approach, end-users are given only the applications they need, based on who they are, their privileges and what they are trying to do.
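A purely conceptual sketch of the stateless idea might look like the following (my own illustration; no real product API is implied): the data center assembles a throwaway desktop manifest from master components according to the user's role.

```python
# Conceptual role-based assembly of a temporary virtual desktop.
# Role names, component names and the manifest shape are all hypothetical.
ROLE_APPS = {
    "finance":  ["os-base", "spreadsheet", "erp-client"],
    "engineer": ["os-base", "ide", "cad-viewer"],
    "guest":    ["os-base", "web-browser"],
}

def compose_virtual_desktop(user, role):
    """Return the manifest of components to stream to this user's session."""
    apps = ROLE_APPS.get(role, ROLE_APPS["guest"])
    return {"user": user, "components": apps, "persistent": False}

if __name__ == "__main__":
    print(compose_virtual_desktop("alice", "engineer"))
```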

The disadvantage of the fat-image approach is that, although the operating system and the applications are stored in the data center, someone still has to do the work when patches have to be applied. Patching is easier because it is done in the data center rather than on individual computers, but every virtual desktop still requires the same patching and management as regular desktops would. In a stateless environment, such patching would be done in one place, and only those who require the patched application would receive it.

As with all new technologies, adoption is slow, particularly in smaller organizations where desktop virtualization may not be an issue. However, for larger organizations, a wait-and-see approach could quickly turn into a missed opportunity sooner than expected.

Visit http://www.OCRuggedLaptops.com for more information about the rugged laptop industry.

Article Source: http://EzineArticles.com/?expert=Mack_Harris