Tuesday, December 30, 2008

Electrophoresis - Separation and purification of DNA fragments

Electrophoresis literally means carrying by electricity. It is an analytical technique commonly used for the separation and purification of DNA fragments. The gel used in electrophoresis is either polyacrylamide or agarose; the former is preferred for smaller DNA fragments and the latter for larger ones. Agarose is a purified powder isolated from agar, a gelatinous material of seaweeds. When agarose powder is dissolved in water and boiled, it sets into a gel on cooling. A gel prepared in a salt solution becomes a good conductor of electricity. The gel forms small pores whose size depends on the amount of agarose in a given volume of water. These pores act as a molecular sieve, allowing smaller molecules to move faster than larger ones.

The electrophoresis box consists of a positive and a negative electrode, a shelf designed to hold the gel, a comb used to form the wells within the gel, and a power supply. The DNA to be electrophoresed is digested with restriction enzymes, which yields DNA fragments of unequal length. The fragments are mixed with sucrose and a dye (such as ethidium bromide or methylene blue); together this mixture is known as the loading dye. Sucrose increases the density of the DNA preparation and the dye increases the visibility of the preparation.

The preparation is loaded into wells at one end of the gel. At least one well is filled with reference DNA (i.e. DNA fragments of known length) for comparison with those of unknown length. An electric current is applied across the electrophoresis chamber: current flows between a negative electrode at the loading end of the gel and a positive electrode at the far end, causing the fragments to move through the pores of the gel. DNA molecules carry a negative electric charge due to the phosphate (PO4) groups that alternate with sugar molecules along the backbone, and opposite charges attract one another. Small DNA molecules move faster than larger ones. All DNA molecules of a given length migrate nearly the same distance into the gel and form bands; each band represents many copies of DNA fragments of about the same length. After electrophoresis is complete, the gel is removed from the chamber and stained with either ethidium bromide (EB) or methylene blue to make the bands easily visible. When the gel is illuminated with UV light, EB-stained bands fluoresce orange; methylene blue gives blue bands visible under normal room light.
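Because migration distance falls off roughly linearly with the logarithm of fragment length over a useful range, the size of an unknown fragment can be estimated by interpolating against the reference ladder. The short Python sketch below illustrates the idea; the ladder values are hypothetical and chosen only for illustration.

import math

# Reference ladder: migration distance (mm) -> known fragment length (bp).
# These values are hypothetical, for illustration only.
ladder = [(10.0, 10000), (18.0, 5000), (27.0, 2000), (35.0, 1000), (44.0, 500)]

def estimate_size(distance_mm):
    """Estimate fragment length from migration distance by
    linear interpolation of log10(length) against distance."""
    pts = sorted(ladder)  # sort by distance
    for (d1, s1), (d2, s2) in zip(pts, pts[1:]):
        if d1 <= distance_mm <= d2:
            frac = (distance_mm - d1) / (d2 - d1)
            log_size = math.log10(s1) + frac * (math.log10(s2) - math.log10(s1))
            return 10 ** log_size
    raise ValueError("distance outside the calibrated range of the ladder")

print(round(estimate_size(30.0)))  # an unknown band at 30 mm, roughly 1500 bp here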

Celera Genomics & HGP

In 1998, a parallel, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. The $300 million Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. Celera Genomics was established in May 1998 by the Perkin-Elmer Corporation (and was later purchased by Applera Corporation), with Dr. J. Craig Venter from The Institute for Genomic Research (TIGR) as its first president. While at TIGR, Venter and Hamilton Smith led the first successful effort to sequence an entire organism's genome, that of the bacterium Haemophilus influenzae. Celera was formed to generate and commercialize genomic information in order to accelerate the understanding of biological processes.
The rise and fall of Celera as an ambitious competitor of the Human Genome Project is the main subject of the book The Genome War by James Shreeve, who takes a strongly pro-Venter point of view. (He followed Venter around for two years in the process of writing the book.) A view from the public effort's side is that of Nobel laureate Sir John Sulston in his book The Common Thread: A Story of Science, Politics, Ethics and the Human Genome.

Celera used a newer, riskier technique called whole genome shotgun sequencing, which had been used to sequence bacterial genomes of up to 6 million base pairs in length, but never for anything nearly as large as the 3 billion base pair human genome. Celera initially announced that it would seek patent protection on "only 200-300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100-300 targets. Contrary to its public promises, the firm eventually filed patent applications on 6,500 whole or partial genes.

Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper) and Science (which published Celera's paper) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 90% of the genome, with much of the remaining 10% filled in later. In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and again in 2005, filling in roughly 8% of the remaining sequence.
The HGP is the best known of many international genome projects aimed at sequencing the DNA of a specific organism. While the human DNA sequence offers the most tangible benefits, important developments in biology and medicine are also expected from the sequencing of model organisms, including mice, fruit flies, zebrafish, yeast, nematodes, plants, and many microbial organisms and parasites. In 2005, researchers from the International Human Genome Sequencing Consortium (IHGSC) of the HGP announced a new estimate of 20,000 to 25,000 genes in the human genome. Previously 30,000 to 40,000 had been predicted, while estimates at the start of the project ranged as high as 2,000,000. The number continues to fluctuate, and it is expected to take many years to agree on a precise value for the number of genes in the human genome.

Goals of the original Human Genome Project (HGP)

  • identify all the approximately 20,000-25,000 genes in human DNA,
  • determine the sequences of the 3 billion chemical base pairs that make up human DNA,
  • store this information in databases,
  • improve tools for data analysis,
  • transfer related technologies to the private sector, and
  • address the ethical, legal, and social issues (ELSI) that may arise from the project.

The goals of the original HGP were not only to determine all 3 billion base pairs in the human genome with a minimal error rate, but also to identify all the genes in this vast amount of data. This part of the project is still ongoing, although a preliminary count indicates about 30,000 genes in the human genome, which is far fewer than predicted by most scientists. Another goal of the HGP was to develop faster, more efficient methods for DNA sequencing and sequence analysis, and to transfer these technologies to industry.

The sequence of the human DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the University of California, Santa Cruz, and ENSEMBL, present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data themselves are difficult to interpret without them.

The process of identifying the boundaries between genes and other features in raw DNA sequence is called genome annotation and is the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. The best current technologies for annotation make use of statistical models that take advantage of parallels between DNA sequences and human language, using concepts from computer science such as formal grammars.

Another, often overlooked, goal of the HGP is the study of its ethical, legal, and social implications. It is important to research these issues and find the most appropriate solutions before they become large dilemmas whose effects manifest as major political concerns.

All humans have unique gene sequences; therefore the data published by the HGP does not represent the exact sequence of each individual's genome. It is the combined genome of a small number of anonymous donors. The HGP genome is a scaffold for future work in identifying differences among individuals. Most of the current effort in identifying differences among individuals involves single nucleotide polymorphisms and the HapMap.
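As a toy illustration of what annotation software does, the sketch below (Python; a deliberately simplified stand-in for real gene finders, which rely on statistical models like those mentioned above) scans a DNA string for open reading frames, one of the simplest signals a gene-finding program looks for:

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=30):
    """Return (start, end) indices of simple open reading frames:
    an ATG followed in-frame by a stop codon. Forward strand only;
    real annotation pipelines are far more sophisticated."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i+3] == START:
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j+3] in STOPS:
                        if (j - i) // 3 >= min_codons:
                            orfs.append((i, j + 3))
                        i = j  # resume scanning after this ORF
                        break
            i += 3
    return orfs

# Example: a short made-up sequence with a tiny ORF (min_codons lowered for the demo).
print(find_orfs("CCATGAAATTTGGGTAACC", min_codons=3))  # [(2, 17)]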


How it was accomplished

The publicly funded groups - the NIH, the Sanger Institute in Great Britain, and numerous groups from around the world - broke the genome into large pieces, approximately 150,000 base pairs in length. These pieces are called "bacterial artificial chromosomes", or BACs, because they can be inserted into bacteria, where they are copied by the bacterial replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The larger, 150,000 base pair chunks were then stitched together to create chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are mapped to chromosomes before being selected for sequencing. The whole-genome shotgun (WGS) method is faster and cheaper, and by 2003 - thanks to the availability of clever assembly algorithms - it had become the standard approach to sequencing most mammalian genomes.
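To make the shotgun idea concrete, here is a deliberately tiny sketch (Python, with toy reads, not real HGP data or any production assembler) of the core step shared by both approaches: repeatedly merge the pair of reads with the longest suffix-prefix overlap until a single contig remains. Real assemblers must also cope with sequencing errors, repeats and billions of reads, which this sketch ignores.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(reads):
    """Greedily merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:          # no overlaps left; stop merging
            break
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads

# Toy "shotgun reads" drawn from the string ATGCGTACGTTAG
print(greedy_assemble(["ATGCGTAC", "GTACGTT", "CGTTAG"]))  # ['ATGCGTACGTTAG']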


Whose genome was sequenced?

In the international public-sector Human Genome Project (HGP), researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of the many collected samples were processed as DNA resources; thus the donor identities were protected, so that neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Dr. Pieter J. de Jong. It has been informally reported, and is well known in the genomics community, that much of the DNA for the public HGP came from a single anonymous male donor from the state of New York.

Technically, it is much easier to prepare DNA cleanly from sperm than from other cell types because of the much higher ratio of DNA to protein in sperm and the much smaller volume in which purifications can be done. Using sperm also provides all chromosomes for study, including equal numbers of sperm carrying the X or the Y sex chromosome. HGP scientists also used white cells from the blood of female donors so as to include female-originated samples. One minor technical issue is that sperm samples contain only half as much DNA from the X and Y chromosomes as from the other 22 chromosomes (the autosomes); this is because each sperm cell contains only one X or one Y chromosome, but not both. Thus in 100 sperm cells there will, on average, be 50 X and 50 Y chromosomes, as compared to 100 copies of each of the other chromosomes.

Although the main sequencing phase of the HGP has been completed, studies of DNA variation continue in the International HapMap Project, whose goal is to identify patterns of SNP groups (called haplotypes, or "haps").

The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe. In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of those in the pool.

The Human Genome Project - Benefits

The work on interpreting genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, disorders of hemostasis, cystic fibrosis, liver diseases and many others. The etiologies of cancers, Alzheimer's disease and other conditions of clinical interest are also considered likely to benefit from genome information, which may in the long term lead to significant advances in their management.
There are also many tangible benefits for biological scientists. For example, a researcher investigating a certain form of cancer may have narrowed down their search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes or to genes in mice, yeast or fruit flies, possible detrimental mutations, interactions with other genes, the body tissues in which the gene is activated, diseases associated with it, and other data types.

Further, a deeper understanding of disease processes at the level of molecular biology may lead to new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not otherwise have been possible.

The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data from this project.
The Human Genome Diversity Project (HGDP), spin-off research aimed at mapping the DNA that varies between human ethnic groups, was rumored to have been halted, but it actually did continue and to date has yielded new conclusions. In the future, the HGDP could expose new data relevant to disease surveillance, human development and anthropology. It could unlock the factors behind, and suggest new strategies for managing, the vulnerability of ethnic groups to certain diseases (see race in biomedicine), and it could also show how human populations have adapted to these vulnerabilities.

What's Turning Genomics Vision Into Reality

In "A Vision for the Future of Genomics Research," published in the April 24, 2003 issue of the journal Nature, the National Human Genome Research Institute (NHGRI) details a myriad of research opportunities in the genome era. This backgrounder describes a few of the more visible, large-scale opportunities.

The International HapMap Project

Launched in October 2002 by NHGRI and its partners, the International HapMap Project has enlisted a worldwide consortium of scientists with the goal of producing the "next-generation" map of the human genome to speed the discovery of genes related to common illnesses such as asthma, cancer, diabetes and heart disease. Expected to take three years to complete, the "HapMap" will chart genetic variation within the human genome at an unprecedented level of precision. By comparing genetic differences among individuals and identifying those specifically associated with a condition, consortium members believe they can create a tool to help researchers detect the genetic contributions to many diseases. Whereas the Human Genome Project provided the foundation on which researchers are making dramatic genetic discoveries, the HapMap will begin building the framework to make the results of genomic research applicable to individuals.
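As a minimal illustration of the underlying idea (a Python sketch with made-up sequences, not HapMap data), the snippet below finds the positions where aligned sequences from different individuals differ (candidate SNPs) and reads off each individual's combination of alleles at those positions, a crude stand-in for a haplotype:

# Aligned sequences for the same genomic region from several individuals
# (made-up data, equal length, no indels, for illustration only).
individuals = {
    "ind1": "ACGTTACGGA",
    "ind2": "ACATTACGGA",
    "ind3": "ACGTTACCGA",
    "ind4": "ACATTACCGA",
}

# Candidate SNP positions: columns where more than one base is observed.
length = len(next(iter(individuals.values())))
snp_positions = [
    i for i in range(length)
    if len({seq[i] for seq in individuals.values()}) > 1
]

# Each individual's alleles at the SNP positions, read as a simple haplotype string.
haplotypes = {
    name: "".join(seq[i] for i in snp_positions)
    for name, seq in individuals.items()
}

print("SNP positions:", snp_positions)   # [2, 7]
print("Haplotypes:", haplotypes)         # {'ind1': 'GG', 'ind2': 'AG', 'ind3': 'GC', 'ind4': 'AC'}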

ENCyclopedia Of DNA Elements (ENCODE)

This NHGRI-led project is designed to develop efficient ways of identifying and precisely locating all of the protein-coding genes, non-protein-coding genes and other sequence-based, functional elements contained in the human DNA sequence. Creating this monumental reference work will help scientists mine and fully utilize the human sequence, gain a deeper understanding of human biology, predict potential disease risk, and develop new strategies for the prevention and treatment of disease. The ENCODE project will begin as a pilot, in which participating research teams will work cooperatively to develop efficient, high-throughput methods for rigorously and fully analyzing a defined set of target regions comprising approximately 1 percent of the human genome. Analysis of this first 30 megabases (Mb) of human genome sequence will allow the project participants to test and compare a variety of existing and new technologies to find the functional elements in human DNA.

Chemical Genomics

NHGRI is exploring the acquisition and/or creation of publicly available libraries of organic chemical compounds, also referred to as small molecules, for use by basic scientists in their efforts to chart biological pathways. Such compounds have a number of attractive features for genome analysis, including their wide structural diversity, which mirrors the diversity of the genome; their ability in many cases to enter cells readily; and the fact that they can often serve as starting points for drug development. The use of these chemical compounds to probe gene function will complement more conventional nucleic acid approaches.

This initiative offers enormous potential. However, it is a fundamentally new approach to genomics, and largely new to basic biomedical research as a whole. As a result, substantial investments in physical and human capital will be needed. NHGRI is currently planning for these needs, which will include large libraries of chemical compounds (500,000 - 1,000,000 total); capacity for robotic-enabled, high-throughput screening; and medicinal chemistry to convert compounds identified through such screening into useful biological tools.

Genomes to Life

The Department of Energy's "Genomes to Life" program focuses on single-cell organisms, or microbes. The fundamental goal is to understand the intricate details of the life processes of microbes so well that computational models can be developed to accurately describe and predict their responses to changes in their environment.

"Genomes to Life" aims to understand the activities of single-cell organisms on three levels: the proteins and multi-molecular machines that perform most of the cell's work; the gene regulatory networks that control these processes; and microbial associations or communities in which groups of different microbes carry out fundamental functions in nature. Once researchers understand how life functions at the microbial level, they hope to use the capabilities of these organisms to help meet many of our national challenges in energy and the environment.

Structural Genomics Consortium

Structural genomics is the systematic, high-throughput generation of the three-dimensional structure of proteins. The ultimate goal for studying the structural genomics of any organism is the complete structural description of all proteins encoded by the genome of that organism. Such three-dimensional structures will be crucial for rational drug design, for diagnosis and treatment of disease, and for advancing our understanding of basic biology. A broad collection of structures will provide valuable biological information beyond that which can be obtained from individual structures.

Dell OptiPlex 360

Designed with growing businesses and organizations with less complex IT infrastructure in mind, the OptiPlex 360 delivers reliable, cost-effective business productivity with Intel Core 2 Duo processors, high-speed memory options, and integrated video support. Customizable to meet your business needs, the OptiPlex 360 offers technology that provides basic manageability, security and energy efficiency, all backed by a choice of smart, desktop-focused services that give your IT professionals the tools they need throughout the technology lifecycle, from acquisition to asset retirement.

Essential business value of the OptiPlex 360 includes:
• Exceptional value for reliable business-class computing, featuring Intel Core 2 Duo, Pentium Dual-Core and Celeron processors.
• Planning support with up to a 12-month lifecycle, stable images, managed transitions, and Dell Image Watch to provide early notification of upcoming technology changes.
• Customizable global service and support through Dell ProSupport service options; Dell Client Manager allows easy system manageability.
• The right fit for basic user productivity, with a choice of two form factors.
• Time-saving tool-less design and Dell-exclusive Direct Detect troubleshooting LEDs that help reduce maintenance and service costs.
• System and BIOS passwords to help prevent unauthorized access, and a chassis loop lock for physical system protection. Proactive Dell Support services help reduce risk and protect your sensitive data with hard drive data recovery and certified data destruction.

Dell is committed to being the greenest PC company on the planet, and the OptiPlex 360 delivers smart energy choices so that you can:
• Achieve outstanding performance with less energy through Dell's Energy Smart power management.
• Help reduce power consumption with Dell's power supply, which is up to 88% efficient (available after 11/77/2008 on selected models).
• Recycle your current desktops free of charge with the purchase of a new Dell OptiPlex.

Gigabyte GA-G31MX-S2/S3L Motherboards

Both the GA-G31MX-S2 and the GA-G31-S3L support Intel's latest high-performance CPUs, which deliver the most energy-efficient performance available today. Based on Intel's micro-architecture, Intel multi-core processors with four cores and two shared L2 caches provide the best performance-per-watt and are an ideal choice for multimedia enthusiasts and intensive multi-tasking environments. These cutting-edge processors offer significant performance boosts and provide an overall more energy-efficient platform, with support for Intel Core 2 multi-core and upcoming 45nm processors.

Solid capacitors for the CPU VRM: Stable system operation depends upon the quality of the CPU VRM (voltage regulator module). GIGABYTE adopts conductive polymer aluminium solid capacitors for the CPU VRM to ensure a longer lifespan for systems in daily operation and to boost system stability under extreme conditions. A CPU VRM with solid capacitors, featuring better electrical conductivity and excellent heat resistance, enhances system durability even when operating in high-temperature environments.

PCI Express interface: The revolutionary PCI Express interface provides scalable bandwidth for multi-purpose usage. The PCI-E x16 interface delivers the utmost graphics experience, while the PCI-E x1 interface delivers twice the bandwidth of the PCI interface, up to 250MB/s, for new-generation I/O peripheral devices.

Dual-channel DDR2 1066 (by overclocking): Get a jump in memory performance with the advanced technology of the DDR2 1066 memory architecture, achieved by overclocking, which delivers superior performance for the most demanding applications.

Intel GMA 3100: The Intel Graphics Media Accelerator 3100 supports Microsoft DirectX 9.0 and the Windows Vista Aero experience. The 3D enhancements of the GMA 3100 greatly improve realism and graphics performance.

SATA 3Gb/s storage interface: The SATA specification doubles bus bandwidth from 1.5Gb/s to 3Gb/s. Native Command Queuing is a new specification that enables out-of-order execution of commands for efficient retrieval of data, and hot-plug support allows users to insert and remove hard disk drives without shutting off power to the system.

Gigabit LAN connectivity: The Gigabit network interface delivers a high-speed LAN connection with data transfer rates of up to 1000Mb/s, providing new-generation connectivity for the broadband era. Gigabit LAN is ideal for seamless internet use such as streaming audio and video content.

GIGABYTE S-series: These Speed-compliant motherboards feature GIGABYTE's proprietary innovative software such as Download Center, @BIOS, Q-Flash, Xpress Install, Boot Menu and Smart Fan, making BIOS and driver management much easier and more user-friendly. Excellent hardware design reinforces BIOS protection through GIGABYTE Virtual Dual BIOS technology and GIGABYTE BIOS Setting Recovery technology, and unique system software such as Xpress Recovery 2, PC Health Monitor, HDD S.M.A.R.T. and C.O.M. further strengthens the stability and reliability of your PC!
RoHS compliant: As a member of the global community looking after the environment, GIGABYTE complies with the European Union's Restriction of the use of certain Hazardous Substances (RoHS) directive, which limits the use of lead, mercury, cadmium and other hazardous substances in electronic products. From component and material selection to production processes, accessories and packaging/colour boxes, GIGABYTE will continue to develop RoHS-compliant PC components and commit valuable resources to promoting and advancing the RoHS directive's goals and objectives.

Intel To Launch Mobile Quad-Core Processors

Keeping in mind the advent of desktop quad-core processors in the mainstream, it was just a matter of time before they showed up in a mobile version too. This was confirmed by Digitimes, which goes on to say that Intel is planning to launch its first quad-core CPU for notebooks, the Core 2 Extreme QX9300, in the third quarter of this year. This new processor will be manufactured at 45nm and have a core frequency of 2.53GHz. The CPU will support FSB speeds of up to 1066MHz, include 12MB of L2 cache and have a maximum TDP of 45W. While this announcement is good in terms of consumer choice, it raises a number of interesting questions. Despite the low heat dissipation of these processors, they will be a lot more power hungry compared to standard dual-core processors, and this will directly affect battery life. Secondly, with such powerful chips, attention turns to the graphics hardware in laptops, which is abysmal to say the least, even at its very best.

Nvidia Launches Tegra Family of Processors

Nvidia has introduced the Tegra family of processors, a tiny computer-on-a-chip, smaller than a US dime (10-cent piece), designed from the ground up to enable the "visual PC experience" on a new generation of mobile computing devices.

"Creating Tegra was a massive challenge. Our vision was to create a platform that will enable the second personal computer revolution, which will be mobile-centric, with devices that last days on a single charge and yet have the web, high-definition media, and computing experience we've come to expect from our PCs," said Jen-Hsun Huang, president and CEO of Nvidia. "Shrinking down a 50-watt PC architecture will not create the discontinuity this industry needs. The culmination of nearly 1,000 man-years of engineering, Tegra is a completely ground-up computer-on-a-chip architecture that consumes 100 times less power. Mobile internet and computing devices built with Tegra are going to be magical."

The Tegra 650 processor is the second product in the Tegra line, the first being the Nvidia Tegra APX 2500 processor, which enables the next generation of Windows Mobile smartphones. With the launch of this new processor, Nvidia Tegra products will reach consumers towards the end of the year.

"With the growing market demand for mobile internet access, Nvidia launched the APX 2500 computer-on-a-chip targeted at smartphones and handsets earlier this year. Recognizing that mobile internet access will occur not just on smartphones and handsets but on computing devices as well, Nvidia announced today the Tegra architecture. Representing the first products to be targeted at the MID and portable device space, it is anticipated to bring integrated capabilities similar to the APX 2500, with Nvidia's graphics expertise, an ARM core, HD video, and advanced power management," said Ian Lao, senior analyst at In-Stat.

Nvidia Tegra is a heterogeneous processor architecture with multiple processors, each architected for a specific class of tasks: an 800 MHz ARM CPU, an HD video processor, an imaging processor, and an ultra-low-power GeForce GPU. With this heterogeneous architecture, Tegra delivers far better power efficiency than existing products in battery-operated computer systems running compelling visual computing applications. The Tegra 650 also features all-day media processing for 130 hours of audio or 30 hours of HD video playback; HD image processing for advanced digital still camera and HD camcorder functions; optimized hardware support for Web 2.0 applications for a true desktop-class internet experience; display support for 1080p HDMI, WSXGA+ LCD and CRT, and NTSC/PAL TV-out; direct support of WiFi, disk drives, keyboard, mouse, and other peripherals; and a complete board support package (BSP) to enable fast time to market for Windows Mobile-based designs.

"With Nvidia's Tegra processor line, we will continue to see impressive mobile innovations in Windows Mobile products," said Todd Warren, corporate vice president of Microsoft's mobile communications business. "Microsoft is dedicated to providing people a best-in-class experience so they can carry a single device for work and play."

AMD Ships Tri-Core Processors

AMD has announced the availability of its triple-core processors, a first for the PC market. The company also updated the quad-core Phenom lineup by resolving the famous "errata bug" that plagued it earlier this year, and has confirmed that quad-core Opteron chips for servers will be available later in the second quarter. The AMD Phenom X3 processors deliver significant enhancements in gaming and high-definition experiences for mainstream PC customers. They provide a full HD experience with support for the latest and most demanding formats, including VC-1, MPEG-2, and H.264, on a mainstream PC. With the AMD Unified Video Decoder (UVD), the solution can process HD playback on the better-suited GPU rather than the CPU, so consumers can enjoy a smooth HD viewing experience - less lag, stalling and dropped scenes - in the latest Blu-ray titles. "In 2007, AMD committed to delivering the AMD Phenom triple-core processor in Q1 2008, and today the company makes good on that promise," said Bob Brewer, corporate vice president, strategic marketing, AMD. "AMD understands that today's PC applications are best accelerated with a range of multi-core processors, and that's why we now deliver the broadest multi-core desktop lineup in the industry."

Creative X-Fi Go

Over the last few years the market for sound cards has all but flatlined. This once-vibrant segment was dominated by Creative for so long that other solutions gradually disappeared. However, Creative itself was ultimately undone: motherboards started sporting onboard solutions that offered virtually all the functionality normal PC users needed, making separate sound cards redundant. Despite this, Creative continued to thrive with its X-Fi brand, offering great functionality and a set of features that could scarcely be found in onboard solutions. With the recent advent of netbooks and mobile computing becoming common, Creative released its X-Fi Go USB solution, which offers a lot of the features found in its PCI-based solutions. How does it fare? Let's find out.

In terms of design, the X-Fi Go does not look like anything more than a normal flash drive. The unit comes in a basic black design and offers 1GB of flash memory. This makes the unit convenient, as the drivers for the sound functionality are stored on the unit itself. In terms of its bundle, the X-Fi Go does not offer much beyond a USB extender and a well-designed headphone/mic set. Software-wise the X-Fi Go is pretty strong. The unit supports Creative's X-Fi headphone surround, EAX 4.0, the much-maligned Creative ALchemy software package that promises to restore surround sound for older games under Windows Vista, and Creative's Wave Studio software that allows casual users to tinker with audio features such as cleaning up hissy tracks, adding special FX to music, etc.

Transcend Introduces aXeRam DDR3-1066

Transcend Information has released its retail-packaged DDR3-1066 and DDR3-1333 240-pin DIMMs. DDR3 is the successor to DDR2 memory and will soon become the industry standard for PC memory modules. Compared to DDR2, it offers faster transfer speeds and better bandwidth with an 8-bit (as opposed to 4-bit) prefetch buffer, and is a perfect match for modern systems using dual- or quad-core processors. Moreover, the operating voltage of DDR3 memory modules has been decreased from 1.8V to 1.5V, reducing actual memory power consumption by 20-30% compared to systems with DDR2 memory. Transcend's DDR3-1066 and DDR3-1333 DIMMs are currently available in 1GB and 2GB capacities respectively.

Austin Huang, Regional Head - Sales, SAARC and APAC, Transcend, said, "We are extremely delighted to introduce our DDR3 memory, resulting in enhanced performance for desktop PC users. Transcend's aXeRam memory will deliver amazing overclocking performance while maintaining rock-solid system stability."

Transcend's DDR3-1066 and DDR3-1333 DIMMs are made of high-quality 128Mx8 DDR3 DRAM chips and use robust PCBs that meet JEDEC (Joint Electron Device Engineering Council) standards. Each chip is selected to the strictest quality and performance standards and is manufactured using small Fine-Pitch Ball Grid Array (FBGA) packages with extra contacts to assure better thermal dissipation, electrical efficiency and reliable computing quality at high clock frequencies. In addition, DDR3 memory modules incorporate an all-new "fly-by" architecture that provides more efficient direct communication between the controller and each DRAM chip, and include dynamic on-DIMM termination to minimize signal reflections at higher speeds.

Windows Mobile?

While Windows Mobile 6.0 is still only a teenager in terms of its not-too-recent launch, it is a fair guess that Windows Mobile 7.0 is already being worked on. According to rumors, someone out there, with fantastic contacts inside Microsoft, was actually able to get their hands on rather detailed information on what we can expect from the next version of Windows Mobile. The information is so detailed, with diagrams and screenshots, that it boggles the mind considering that the expected release for the OS is somewhere in 2009. The person who got hold of this information claims that the next-generation Windows Mobile device is probably going to rock the iPhone's world. This future Windows Mobile OS is apparently going to use motion gestures for navigating menus and other options, much like the iPhone. The word is that devices incorporating this OS will even use the camera to detect gestures to make navigation easier, and the OS will also be able to judge what position the phone is in and where it is - for example, whether it is lying idle on the table, in your pocket or even in a handbag. It will incorporate gyroscopes and accelerometers for various purposes, making the device easy to manipulate. The rumors even state that the media player and playback will take on a new look and feel, somewhere in the league of the iPhone or some of the more recent Symbian mobiles. The new OS would also probably do away with the stylus completely and make the device fully functional with just the fingers, going so far as to have just a single button. Drawing, scrolling and even writing could be totally finger controlled.

SanDisk Launches New DAP For slotMusic Cards

SanDisk has just unveiled a new DAP - the Sansa slotMusic Player. The plug-and-play, portable music player was specially designed for use with the slotMusic cards that were launched in the US. In addition to the Sansa-branded player, SanDisk has created personalized, branded slotMusic players for popular artists such as Robin Thicke and ABBA. It weighs in at a little over two ounces and has dimensions of 2.75 W x 1.4375 H x 1.4375 D inches. "With no need for computers or cords, the Sansa slotMusic Player gives consumers more time to play, and less time to worry about managing or downloading their music," said Daniel Schreiber, senior vice president and general manager for SanDisk. "SanDisk is all about building products that are easy for consumers to enjoy. Just insert your favorite artist's slotMusic card into the Sansa slotMusic Player and press play." The player doesn't require a PC or the internet for managing music. Consumers can choose their slotMusic-filled microSD cards and pop them into the device. In addition to slotMusic cards, the player also supports DRM-free MP3 and WMA files. SanDisk has also developed a special line of Sansa slotMusic Player accessories, including a Sansa card wallet, an armband, and additional slotMusic player "shells" for customizing a player to one's own tastes. The Sansa-branded players ship with a customizable black shell, earphones and a battery. The new Sansa slotMusic Player - both Sansa-branded without cards and artist-branded including cards with additional content that may include liner notes, album art and other one-of-a-kind content personally chosen by the artist - is expected to be available from retailers in Europe and other regions of the world in 2009 for an MSRP of about $19.99 and $34.99 respectively.

Saturday, December 27, 2008

Internet Overtakes Newspapers As a News Source In 2008

The internet has surpassed all other media, except television, as a main source for national and international news.

According to Pew Research, 40% say they get most of their news about national and international issues from the internet, up from just 24% in September 2007. Television continues to be cited most frequently as a main source for national and international news, at 70%.

The future looks dim for television and newspapers.

For young people, though, the internet now rivals television as a main source of national and international news. Nearly six-in-ten Americans younger than 30 (59%) say they get most of their national and international news online; an identical percentage cites television.

The percentage of people younger than 30 citing television as a main news source has declined from 68% in September 2007 to 59% currently. This mirrors a trend seen earlier this year in campaign news consumption. (See “Internet Now Major Source of Campaign News,” News Interest Index, Oct. 31, 2008.)

The survey by the Pew Research Center for the People & the Press, conducted Dec. 3-7 among 1,489 adults, finds there has been little change in the individual TV news outlets that people rely on for national and international news. Nearly a quarter of the public (23%) says they get most of their news from CNN, while 17% cite Fox News; smaller shares mention other cable and broadcast outlets.

In an interview with a British newspaper The Daily Telegraph, Andy Burnham, the UK Culture Secretary, said that the Internet could be given cinema-style age ratings as part of an international crackdown on offensive and harmful online activity.

Calling the Internet "quite a dangerous place," the Cabinet minister also said, "... I think we are having to revisit that stuff seriously now. It's true across the board in terms of content, harmful content, and copyright. Libel is [also] an emerging issue.... There is content that should just not be available to be viewed. That is my view. Absolutely categorical. This is not a campaign against free speech, far from it; it is simply there is a wider public interest at stake when it involves harm to other people. We have got to get better at defining where the public interest lies and being clear about it."

International cooperation is viewed as essential by the UK Culture Secretary, and the new Obama administration offers new opportunities. "The change of administration is a big moment. We have got a real opportunity to make common cause," he says. "The more we seek international solutions to this stuff - the UK and the US working together - the more that an international norm will set an industry norm."

My view is that, despite the very negative reaction by those commenting on the article, several of the proposals mentioned by the Culture Secretary will be coming soon - probably in 2009. This interview offers a glimpse into what the current thinking is regarding Internet decency. As with other aspects of the Internet, the international challenges are immense, but UK experts are obviously working closely with their US counterparts on specific next steps.

Web ratings would be a significant, and very controversial, development for the public sector and for society as a whole. All online content would need to be classified (similar to movies but in real-time at sites like YouTube). Opponents argue that any rating systems will be biased and flawed.

No doubt, the new technology and processes required by the masses would be overwhelming. There are great arguments against government intervention. Current laws around Internet piracy can't even be enforced. What new enforcement police will be put in place? What happens to rating violators? Who decides what's what? What about sites that cross into multiple categories (like newspapers)? Is this approach "big brother" from government? How can we monitor real-time blogs, health sites, or other content that falls into various shades of gray?

I agree that the obstacles are huge, and yet I (reluctantly) support aspects of Andy Burnham's position. The negative attacks are unfair and don't offer workable solutions. We can't keep doing the same things and expect different results online. We must provide mechanisms for families to surf in line with their values and not let a minority of "bad guys" control the Internet. While it would be best if the technology tools existed now to maintain one's integrity online without government involvement, our problems are getting worse - not better. A few weeks back, I wrote about ISAlliance's newly proposed cyber security social contract, which would also help if implemented.

What we need is easy-to-use technology to help move pragmatic proposals forward. No doubt, the big Internet players like Microsoft and Google are also involved in planning efforts. Perhaps proposals should start off with voluntary standards and extensive new training by ISPs? However, I agree with opponents that technology and legislation alone will not solve our Internet decency problems. We need to win the hearts and minds of the majority online. And yet, we also need to police the bad actors online. Setting appropriate standards (like speed limits on highways) is an important step.

How Xbox Works

The game consoles that are available today are never enough for video gamers; their attention is always focused on what the next great thing will be. In 2000, it was the PlayStation 2. The game console wars heated up as Nintendo unveiled its latest console, called GameCube. But the big news was that the computer software giant Microsoft entered the multi-billion dollar game console market with the Xbox. The console is a black box with a large "X" imprinted into the top. Microsoft Chairman Bill Gates has said that the Xbox has more power than any console currently on the market. Sony is currently the undisputed leader of the game console industry, but it has to be looking over its shoulder at Microsoft. With $500 million in its marketing arsenal, the software giant is pitting its Xbox against the PS2 in a head-to-head battle for supremacy in the $20 billion game console industry. Microsoft says that its marketing for the Xbox has been the largest effort ever for one of its products. In fact, the Xbox's marketing budget is the largest for any game console in history, easily surpassing Sega's $100 million campaign in 1998. But will money alone be enough to push Xbox ahead of the PlayStation 2? On paper, the Xbox has more brute power and speed than any game console on the market. Now, we'll take a look at this machine and see how it compares to the competition.

Inside the X

In March 2000, rumors that Microsoft was developing a game console were confirmed when Gates took the wraps off the Xbox demo unit. In January 2001, the demo model, a big chrome "X" with a green-glowing light in the middle, was replaced by a more traditional black box. As analysts predicted, the only part of the demo model to make it into the final design is the glowing green light on top of the box. The sidewinder controller pad used with the demo unit was also altered for the final Xbox design. A lot has been made of the Xbox's design, but it takes more than a cool look to sell gamers on a product. Just like a book, it's what's inside the cover that really matters. One advantage that Microsoft has enjoyed is that it has been able to sit back and watch what other game console manufacturers have done. In doing so, Microsoft's designers have examined what has worked and what has failed in recent game consoles. On the inside, the Xbox is fairly similar to a PC. But Microsoft maintains that it is not a PC for your living room. There's no mouse or keyboard to go with it. The Xbox does boast:
• A modified 733-megahertz (MHz) Intel Pentium III processor with a maximum bus transfer rate of 6.4 gigabytes per second (GBps). The Xbox possesses the fastest processing speeds for a game console to date. For comparison, the PlayStation 2 has a 300-MHz processor and a maximum bus transfer rate of 3.2 GBps. The Nintendo GameCube has a 485-MHz processor and a 2.6-GB maximum bus transfer rate.
• A custom 250-MHz 3-D graphics processor from Nvidia that can process more than 1 trillion operations per second and produce up to 125 million polygons per second. Polygons are the building blocks of 3-D graphic images; increasing the number of polygons results in sharper, more detailed images. The graphics processor also supports high resolutions of up to 1920x1080 pixels. For comparison, the PlayStation 2 has a 150-MHz graphics processor and produces 70 million polygons per second. The GameCube has a 162-MHz graphics processor and produces 12 million polygons per second. It should be pointed out that the PlayStation 2 and Xbox figures are theoretical top speeds -- it's unlikely that your system will reach that limit. Nintendo's figure is considered a more realistic number for its console. (For a sense of scale, see the quick calculation after this list.)
• A custom 3-D audio processor that supports 256 audio channels and Dolby AC3 encoding
• An 8-GB built-in hard drive (Having a built-in hard drive allows games to start up faster.)
• 64 MB of unified memory, which game developers can allocate to the central processing unit and graphics processing unit as needed (This arguably makes the Xbox more flexible for game designers.)
• A media communications processor (MCP), also from Nvidia, that enables broadband connectivity, and a 10/100-Mbps (megabits per second) built-in Ethernet port that allows you to use your cable modem or DSL to play games online. A 56K modem will be an optional addition later. Microsoft has also teamed with NTT DoCoMo, the Japanese telecommunications giant, to create net access for Japanese gamers.
• Other Xbox features include: a 5X DVD drive with movie playback (functional with the addition of a movie playback kit), an 8-MB removable memory card, four custom game controller ports (one controller sold with the unit), HDTV support, and an expansion port.
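To put the polygon numbers quoted above in perspective, here is a quick back-of-the-envelope calculation (a Python sketch using the theoretical peak rates from this list; the 60 fps target is just an assumption for illustration):

# Theoretical peak polygon throughput quoted above, in polygons per second.
peak_rates = {
    "Xbox": 125_000_000,
    "PlayStation 2": 70_000_000,
    "GameCube": 12_000_000,
}

FPS = 60  # assumed target frame rate for the comparison

for console, per_second in peak_rates.items():
    per_frame = per_second / FPS
    print(f"{console}: about {per_frame:,.0f} polygons per frame at {FPS} fps")

Even at its theoretical peak, the Xbox budget works out to roughly two million polygons per frame at 60 fps, which is why these figures should be read as ceilings rather than typical in-game numbers.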

The Games

Game superiority ultimately decides who wins the battle in the video game console industry. You could design a machine with 10 times more power and speed than the Xbox, but, if the games stink, you can forget about selling it. Having better games is what vaulted Sony over Nintendo in the late 1990s. Like the PS2, the Xbox uses proprietary 4.7-GB DVD games. Microsoft has signed deals with more than 150 video game makers who have committed themselves to developing games for Microsoft's Xbox game console. These game developers include id Software, maker of the popular Quake series, and Eidos Interactive, which makes the Tomb Raider games featuring Lara Croft. Other Xbox game manufacturers include Bandai, Capcom, Hudson Soft, Konami, Midway Home Entertainment, Namco, Sierra Studios, THQ and Ubi Soft. Microsoft, itself a PC game publisher, is producing about 30 percent of Xbox's games. One of the most impressive qualities of the Xbox is its realistic environments. For example, characters cast shadows on each other, making for some pretty realistic scenes. The momentum of the PS2 might be too much for the Xbox to overcome -- but then again, in 1995, no one thought that Sony would surpass Nintendo in popularity.

How 3DO Creates Video Games

Video games are enormously popular all over the world. In fact, the video game industry is a multibillion-dollar-a-year machine -- a successful video game, just like a popular music CD, can sell hundreds of thousands or even millions of copies! You have probably wondered what goes into making a good video game. You may even want to get into the business yourself. Here are some of the questions that you may be wondering about:
• Where do game ideas come from?
• How many people are involved in making a game and what do they do?
• How is a game developed?
• How does a game get to my local store?

To understand the entire process of video game development, we went to the folks at 3DO. 3DO is a major publisher of video games, with several popular titles for the Nintendo 64 and other game consoles, as well as PC and Mac computer systems. Now, we will follow the development of Portal Runner, a new game from 3DO. You will learn about the game's technology, how the idea was developed and how the game will be distributed.

Where the Game Comes From

All games start with an idea. But where that idea originates can be traced to one of several sources:
• An original concept presented by an employee
• An original concept pitched to the company by an outsider
• A sequel to an existing game
• A spinoff based on a character from an existing game
• A game based on an existing character or story (such as movie, TV or comic characters)
• A simulation of another game medium (such as board games and card games)
• A game targeted to a specific demographic
• A simulation of a real world event
• A game designed to take advantage of a specific game platform (such as the Internet or an advanced interactive game system)

Once the idea is accepted by the company as a viable game, then a preproduction team is assembled to begin developing the idea into a fully realized game. How the game develops depends greatly on what type of game it is. The story line and design of a game based on an existing movie or comic character are going to be much more restricted than those for a completely original game concept. Likewise, a simulation based on a real world event, such as a baseball game, has definite boundaries in what can be done. Video games can be extremely different from one another. And while there is a huge variety of games available, most fall into certain broad categories:
• 3D Action/Adventure (Portal Runner, Army Men, Tomb Raider)
• Simulation (Army Men: Air Combat, Aero Fighters' Assault, Maestro Music)
• Sports (Sammy Sosa High Heat Baseball, Tony Hawk Pro Skater)
• Strategy/Role-playing/Adventure (Heroes of Might and Magic, Zelda, Final Fantasy)
• Fighting (Mortal Kombat, SoulFighter)
• Puzzle (Tetris, Pokemon Puzzle League)
• Shooter (Defender, Silpheed)
• Platform (Sonic, Super Mario Brothers)
• Racing (Mario Kart, Tokyo Xtreme Racer)
• Conversion (American Arcade Pinball, Who Wants to Be a Millionaire?)

Of course, a lot of games include aspects from more than one of these categories, and a few games are in a category all their own. In the case of Portal Runner, 3DO took a character from one of its most popular franchises and gave her a spinoff title of her own that falls into the 3D Action/Adventure category. The character, Vikki Grimm, has figured prominently in the Army Men universe. Portal Runner is not considered a sequel because 3DO is taking one character and building an entirely new game universe around her. As you learn about the development of Portal Runner, remember that many of the steps in the process could change significantly for a different title based on the nature of the game being developed.

Planning the Game

The preproduction team normally includes one each of the following people:
• Director
• Designer
• Software Engineer/Programmer
• Artist
• Writer

Sometimes a team will not have every one of these people and other times it will have more than one person in a particular category. Another person assigned to the game from the outset is the producer. While the director provides the overall vision and direction for the game and is in charge of managing all the team members, the producer is in charge of the business side. For example, the producer maintains the production and advertising budgets and makes sure that the game stays within budget. The first thing that the preproduction team does is develop the story line for the game. Think of this like writing the outline for a novel. The story line identifies the theme of the game, the main characters and the overall plot. Also, areas in the game where a full motion video (FMV) sequence would help the story along are established. An important part of developing the story line is knowing the nature of the game. This means that the game designer is typically involved from the very beginning; he/she is responsible for things like:
• identifying traits and features of the game
• the type of gameplay and user interaction that is developed
• how the game will use the technology available on a particular platform (video game system or computer)

Portal Runner is a linear game. This means that you follow a predetermined path and accomplish specific goals to complete the game. The pattern of the game is: FMV1, Play1, FMV2, Play2, FMV3, Play3 and so on until the end. Each play portion has a different look, theme and goal, all of which combine to form the game world. Linear play makes the story line much easier to create than it would be for a game that branches or has multiple endings. Branching games can contain a series of paths that all lead to the same ending. Even more difficult are branching games that can result in one of several different endings, depending on the path taken. Of course, the type of game largely determines what the story line and style can be. A puzzle or sports game would not require as detailed a story line as a 3D action or role-playing game.

Once the story line is developed, the team creates a set of storyboards. A storyboard is a collection of still drawings, words and technical instructions that describe each scene of the game. These include storyboards for the FMV sequences that introduce the story and continue it between the periods of actual gameplay. In addition to storyboarding the game, the designers will map out the different worlds, or levels of play, within the game during the preproduction phase. The attributes of each world and the elements contained within it are pulled directly from the story line.
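The alternating FMV/Play pattern is easy to picture as a simple sequence of stages. The sketch below (Python, with hypothetical stage names rather than 3DO's actual level data) contrasts a linear flow with a branching one, which is one way to see why branching story lines are harder to write:

# A linear game is an ordered list of stages played start to finish,
# whereas a branching game is a graph of stages. Stage names here are
# invented placeholders.
linear = ["FMV1", "Play1", "FMV2", "Play2", "FMV3", "Play3", "END"]

# Branching: each stage maps to its possible next stages.
branching = {
    "Play1": ["Play2a", "Play2b"],
    "Play2a": ["END_good"],
    "Play2b": ["END_bad"],
}

def count_paths(graph, stage):
    """Number of distinct paths from `stage` to any ending."""
    nexts = graph.get(stage)
    if not nexts:                       # no successors: it's an ending
        return 1
    return sum(count_paths(graph, n) for n in nexts)

print(len(linear) - 1, "transitions in the linear game")          # always exactly one path
print(count_paths(branching, "Play1"), "possible paths in the branching game")  # 2

Every extra branch multiplies the number of story paths the writers and designers have to account for, which is exactly the difficulty the article describes.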

Developing the Game

Once the storyboards and overall game level designs are complete, the game enters the production phase. The preproduction team expands as needed to include additional artists, programmers and designers. 3DO's artists begin developing the 3D models that will make up the worlds of Portal Runner using a software application called 3D Studio Max. Richly detailed texture maps are created for each object. While the game developers at 3DO create the actual game environment using these models and textures, another division of the company, PlayWorks, will use the same models to develop the animated FMV sequences for the game.

Meanwhile, the programmers are writing custom code in the C programming language that will provide the framework for the game objects. A lot of code is pulled from the company's library, a bank of already-developed code that can be repurposed for different games. Some of the code is the 3D engine, an application that generates all the polygons, shadows and textures that you see. Another piece of code is the artificial intelligence component. This is the logic of the game: it establishes the physics of the game, detects interaction and collisions of objects and controls movement of the characters.

Development of the game code is done using a special development version of the particular game system that has increased memory, an SVGA monitor connection, a network connection and a hard drive. All the bits and pieces -- objects, textures and code -- are fed into a special utility called a tool chain that combines the pieces into one big piece of executable code. The tool chain produces code for a specific platform, which basically means that the game will actually run on the game system it was designed for. To test the game, Portal Runner director John Salera uses another specialized game console built for debugging games.

Wednesday, December 24, 2008

Prey of the Carnivore

The FBI plans to use Carnivore for specific reasons. In particular, the agency will request a court order to use Carnivore when a person is suspected of:
• Terrorism
• Child pornography/exploitation
• Espionage
• Information warfare
• Fraud

There are some key issues that are causing a great deal of concern from various sources:

• Privacy - Many folks think that Carnivore is a severe violation of privacy. While the potential for abuse is certainly there, the Electronic Communications Privacy Act (ECPA) provides legal protection of privacy for all types of electronic communication. Any type of electronic surveillance requires a court order and must show probable cause that the suspect is engaged in criminal activities. Therefore, use of Carnivore in any way that does not adhere to ECPA is illegal and can be considered unconstitutional.
• Regulation - There is a widespread belief that Carnivore is a huge system that can allow the U.S. government to seize control of the Internet and regulate its use. To do this would require an amazing infrastructure -- the FBI would need to place Carnivore systems at every ISP, including private, commercial and educational. While it is theoretically possible to do so for all of the ISPs operating in the United States, there is still no way to regulate those operating outside of U.S. jurisdiction. Any such move would also face serious opposition from every direction.
• Free speech - Some people think that Carnivore monitors all of the content flowing through an ISP, looking for certain keywords such as "bomb" or "assassination." Any packet sniffer can be set to look for certain patterns of characters or data. Without probable cause, though, the FBI has no justification to monitor your online activity and would be in severe violation of ECPA and your constitutional right to free speech if it did so.
• Echelon - This is a secret network rumored to be under development by the National Security Agency (NSA), supposedly designed to detect and capture packets crossing international borders that contain certain keywords, such as "bomb" or "assassination." There is no solid evidence to support the existence of Echelon. Many people have confused this rumored system with the very real Carnivore system.

All of these concerns have made implementation of Carnivore an uphill battle for the FBI. The FBI has refused to disclose the source code and certain other pieces of technical information about Carnivore, which has only added to people's concerns. But, as long as it is used within the constraints and guidelines of ECPA, Carnivore has the potential to be a useful weapon in the war on crime.

The Process

Now that you know a bit about what Carnivore is, let's take a look at how it works:
1. The FBI has a reasonable suspicion that someone is engaged in criminal activities and requests a court order to view the suspect's online activity.
2. A court grants the request for a full content-wiretap of e-mail traffic only and issues an order. A term used in telephone surveillance, "content-wiretap" means that everything in the packet can be captured and used. The other type of wiretap is a trap-and-trace, which means that the FBI can only capture the destination information, such as the e-mail account of a message being sent out or the Web-site address that the suspect is visiting. A reverse form of trap-and-trace, called pen-register, tracks where e-mail to the suspect is coming from or where visits to a suspect's Web site originate.
3. The FBI contacts the suspect's ISP and requests a copy of the back-up files of the suspect's activity.
4. The ISP does not maintain customer-activity data as part of its back-up.
5. The FBI sets up a Carnivore computer at the ISP to monitor the suspect's activity. The computer consists of:
• A Pentium III Windows NT/2000 system with 128 megabytes (MB) of RAM
• A commercial communications software application
• A custom C++ application that works in conjunction with the commercial program above to provide the packet sniffing and filtering
• A type of physical lockout system that requires a special passcode to access the computer (This keeps anyone but the FBI from physically accessing the Carnivore system.)
• A network isolation device that makes the Carnivore system invisible to anything else on the network (This prevents anyone from hacking into the system from another computer.)
• A 2-gigabyte (GB) Iomega Jaz drive for storing the captured data (The Jaz drive uses 2-GB removable cartridges that can be swapped out as easily as a floppy disk.)
6. The FBI configures the Carnivore software with the IP address of the suspect so that Carnivore will only capture packets from this particular location. It ignores all other packets.
7. Carnivore copies all of the packets from the suspect's system without impeding the flow of the network traffic.
8. Once the copies are made, they go through a filter that only keeps the e-mail packets. The program determines what the packets contain based on the protocol of the packet. For example, all e-mail packets use the Simple Mail Transfer Protocol (SMTP).
9. The e-mail packets are saved to the Jaz cartridge.
10. Once every day or two, an FBI agent visits the ISP and swaps out the Jaz cartridge. The agent takes the retrieved cartridge and puts it in a container that is dated and sealed. If the seal is broken, the person breaking it must sign, date and reseal it -- otherwise, the cartridge can be considered "compromised."
11. The surveillance cannot continue for more than a month without an extension from the court. Once complete, the FBI removes the system from the ISP.
12. The captured data is processed using Packeteer and Coolminer.
13. If the results provide enough evidence, the FBI can use them as part of a case against the suspect.
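The filtering described in steps 6 through 8 can be sketched in a few lines of Python. This is only an illustration of the general idea -- keep packets that involve the suspect's IP address and that look like e-mail (SMTP) traffic -- and the Packet structure and field names here are assumptions made for the sketch, not Carnivore's actual design.

from dataclasses import dataclass

SMTP_PORT = 25  # SMTP is the protocol used for outgoing e-mail

@dataclass
class Packet:
    src_ip: str       # originating address
    dst_ip: str       # destination address
    dst_port: int     # lets us guess the protocol (25 = SMTP)
    payload: bytes    # the data carried by the packet

def carnivore_filter(packets, suspect_ip):
    """Keep only packets that involve the suspect AND look like e-mail traffic."""
    kept = []
    for pkt in packets:
        involves_suspect = suspect_ip in (pkt.src_ip, pkt.dst_ip)
        looks_like_email = pkt.dst_port == SMTP_PORT
        if involves_suspect and looks_like_email:
            kept.append(pkt)   # in step 9 these would be written to the Jaz cartridge
    return kept

# Example: only the first packet survives the filter.
traffic = [
    Packet("10.0.0.5", "192.0.2.1", 25, b"MAIL FROM:<suspect@example.com>"),
    Packet("10.0.0.5", "192.0.2.1", 80, b"GET / HTTP/1.0"),
    Packet("10.0.0.9", "192.0.2.1", 25, b"MAIL FROM:<someone@example.com>"),
]
print(carnivore_filter(traffic, "10.0.0.5"))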

Carnivorous Evolution

Carnivore is apparently the third generation of online-detection software used by the FBI. While information about the first version has never been disclosed, many believe that it was actually a readily available commercial program called Etherpeek. In 1997, the FBI deployed the second generation program, Omnivore. According to information released by the FBI, Omnivore was designed to look through e-mail traffic travelling over a specific Internet service provider (ISP) and capture the e-mail from a targeted source, saving it to a tape-backup drive or printing it in real-time. Omnivore was retired in late 1999 in favor of a more comprehensive system, the DragonWare Suite, which allows the FBI to reconstruct e-mail messages, downloaded files or even Web pages. DragonWare contains three parts:
• Carnivore - A Windows NT/2000-based system that captures the information
• Packeteer - No official information released, but presumably an application for reassembling packets into cohesive messages or Web pages
• Coolminer - No official information released, but presumably an application for extrapolating and analyzing data found in the messages

As you can see, officials have not released much information about the DragonWare Suite: nothing about Packeteer and Coolminer, and very little detailed information about Carnivore. But we do know that Carnivore is basically a packet sniffer, a technology that is quite common and has been around for a while.

How Carnivore Works

You may have heard about Carnivore, a controversial program developed by the U.S. Federal Bureau of Investigation (FBI) to give the agency access to the online/e-mail activities of suspected criminals. For many, it is eerily reminiscent of George Orwell's book "1984." What exactly is Carnivore? Where did it come from? How does it work? What is its purpose? Now you will learn the answers to these questions and more!

I've heard that data travels in packets on a computer network. What is a packet, and why do networks use them?

It turns out that everything you do on the Internet involves packets. For example, every Web page that you receive comes as a series of packets, and every e-mail you send leaves as a series of packets. Networks that ship data around in small packets are called packet-switched networks.

On the Internet, the network breaks an e-mail message into parts of a certain size in bytes. These are the packets. Each packet carries the information that will help it get to its destination -- the sender's IP address, the intended receiver's IP address, something that tells the network how many packets this e-mail message has been broken into and the number of this particular packet. The packets carry the data in the protocols that the Internet uses: Transmission Control Protocol/Internet Protocol (TCP/IP). Each packet contains part of the body of your message. A typical packet contains perhaps 1,000 or 1,500 bytes.

Each packet is then sent off to its destination by the best available route -- a route that might be taken by all the other packets in the message or by none of the other packets in the message. This makes the network more efficient. First, the network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. Second, if there is a problem with one piece of equipment in the network while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message.

Depending on the type of network, packets may be referred to by another name:
• frame
• block
• cell
• segment

Most packets are split into three parts:
• header - The header contains instructions about the data carried by the packet. These instructions may include:
o Length of packet (some networks have fixed-length packets, while others rely on the header to contain this information)
o Synchronization (a few bits that help the packet match up to the network)
o Packet number (which packet this is in a sequence of packets)
o Protocol (on networks that carry multiple types of information, the protocol defines what type of packet is being transmitted: e-mail, Web page, streaming video)
o Destination address (where the packet is going)
o Originating address (where the packet came from)
• payload - Also called the body or data of a packet. This is the actual data that the packet is delivering to the destination. If a packet is fixed-length, then the payload may be padded with blank information to make it the right size.
• trailer - The trailer, sometimes called the footer, typically contains a couple of bits that tell the receiving device that it has reached the end of the packet. It may also have some type of error checking. The most common error checking used in packets is the Cyclic Redundancy Check (CRC).

In its simplest form (really a checksum rather than a full CRC), the error check works like this: the sending device counts the 1s in the payload and stores the total as a hexadecimal value in the trailer. The receiving device adds up the 1s in the payload and compares the result to the value stored in the trailer. If the values match, the packet is good. If the values do not match, the receiving device sends a request to the originating device to resend the packet.

As an example, let's look at how an e-mail message might get broken into packets. Let's say that you send an e-mail to a friend. The e-mail is about 3,500 bits (3.5 kilobits) in size. The network you send it over uses fixed-length packets of 1,024 bits (1 kilobit). The header of each packet is 96 bits long and the trailer is 32 bits long, leaving 896 bits for the payload. To break the 3,500 bits of message into packets, you will need four packets (divide 3,500 by 896 and round up). Three packets will contain 896 bits of payload and the fourth will have 812 bits.

Each packet's header will contain the proper protocols, the originating address (the IP address of your computer), the destination address (the IP address of the computer where you are sending the e-mail) and the packet number (1, 2, 3 or 4, since there are 4 packets). Routers in the network will look at the destination address in the header and compare it to their lookup table to find out where to send the packet. Once the packet arrives at its destination, your friend's computer will strip the header and trailer off each packet and reassemble the e-mail based on the numbered sequence of the packets.
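The arithmetic in the e-mail example can be reproduced with a short Python sketch. The sizes (1,024-bit packets, 96-bit header, 32-bit trailer, 896-bit payload) come from the text above; the checksum shown is the simplified count-the-1s scheme described earlier, not a true CRC.

MESSAGE_BITS = 3500    # size of the e-mail in bits
PACKET_BITS  = 1024    # fixed-length packets
HEADER_BITS  = 96
TRAILER_BITS = 32
PAYLOAD_BITS = PACKET_BITS - HEADER_BITS - TRAILER_BITS   # 896 bits of payload per packet

def split_into_packets(message_bits, payload_bits):
    """Return (packet_number, payload_size) pairs for the whole message."""
    packets = []
    remaining = message_bits
    number = 1
    while remaining > 0:
        packets.append((number, min(remaining, payload_bits)))
        remaining -= payload_bits
        number += 1
    return packets

def simple_checksum(payload_bits_list):
    """The simplified error check described above: count the 1 bits, report in hex."""
    return hex(sum(payload_bits_list))

print(split_into_packets(MESSAGE_BITS, PAYLOAD_BITS))
# -> [(1, 896), (2, 896), (3, 896), (4, 812)]  : three full payloads plus one of 812 bits

print(simple_checksum([1, 0, 1, 1, 0, 1]))   # -> '0x4' (four 1s in this tiny payload)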

Definition of a packet

A packet is the unit of data that is routed between an origin and a destination on the Internet or any other packet-switched network. When any file (e-mail message, HTML file, Graphics Interchange Format file, Uniform Resource Locator request, and so forth) is sent from one place to another on the Internet, the Transmission Control Protocol (TCP) layer of TCP/IP divides the file into "chunks" of an efficient size for routing. Each of these packets is separately numbered and includes the Internet address of the destination. The individual packets for a given file may travel different routes through the Internet. When they have all arrived, they are reassembled into the original file (by the TCP layer at the receiving end). A packet-switching scheme is an efficient way to handle transmissions on a connectionless network such as the Internet. An alternative scheme, circuit-switched, is used for networks allocated for voice connections. In circuit-switching, lines in the network are shared among many users as with packet-switching, but each connection requires the dedication of a particular path for the duration of the connection.

Types of networks

Broadcast network

A broadcast network avoids the complex routing procedures of a switched network by ensuring that each node's transmissions are received by all other nodes in the network. Therefore, a broadcast network has only a single communications channel. A wired local area network (LAN), for example, may be set up as a broadcast network, with one user connected to each node and the nodes typically arranged in a bus, ring, or star topology. Nodes connected together in a wireless LAN may broadcast via radio or optical links. On a larger scale, many satellite radio systems are broadcast networks, since each Earth station within the system can typically hear all messages relayed by a satellite.

Telecommunications network

Data transfer

The network layer breaks data into packets and determines how the packets are routed within the network, which nodes (if any) will check packets for errors along the route, and whether congestion control is needed in a heavily loaded network. The data-link layer transforms a raw communications channel into a line that appears essentially free of transmission errors to the network layer. This is done by breaking data up into data frames, transmitting them sequentially, and processing acknowledgment frames sent back to the source by the destination. This layer also establishes frame boundaries and implements recovery procedures from lost, damaged, or duplicated frames. The physical layer is the transmission medium itself, along with various electric and mechanical specifications.

Data recognition and use

The application layer is difficult to generalize, since its content is specific to each user. For example, distributed databases used in the banking and airline industries require several access and security issues to be solved at this level. Network transparency (making the physical distribution of resources irrelevant to the human user) also is handled at this level. The presentation layer, on the other hand, performs functions that are requested sufficiently often that a general solution is warranted. These functions are often placed in a software library that is accessible by several users running different applications. Examples are text conversion, data compression, and data encryption.

User interface with the network is performed by the session layer, which handles the process of connecting to another computer, verifying user authenticity, and establishing a reliable communication process. This layer also ensures that files which can be altered by several network users are kept in order. Data from the session layer are accepted by the transport layer, which separates the data stream into smaller units, if necessary, and ensures that all arrive correctly at the destination. If fast throughput is needed, the transport layer may establish several simultaneous paths in the network and send different parts of the data over each path. Conversely, if low cost is a requirement, then the layer may time-multiplex several users' data over one path through the network. Flow control is also regulated at this level, ensuring that data from a fast source will not overrun a slow destination.

Open systems interconnection

Different communication requirements necessitate different network solutions, and these different network protocols can create significant problems of compatibility when networks are interconnected with one another. In order to overcome some of these interconnection problems, the open systems interconnection (OSI) model was approved in 1983 as an international standard for communications architecture by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). The OSI model consists of seven layers, each of which is selected to perform a well-defined function at a different level of abstraction. The bottom three layers provide for the timely and correct transfer of data, and the top four ensure that arriving data are recognizable and useful. While all seven layers are usually necessary at each user location, only the bottom three are normally employed at a network node, since nodes are concerned only with timely and correct data transfer from point to point.
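As a quick reference, the seven layers and the host/node split described above can be written out as a short sketch. The one-line summaries are paraphrases of the preceding sections, nothing more.

# The seven OSI layers, numbered from the bottom up, with a one-line role for each.
OSI_LAYERS = {
    1: ("Physical",     "transmission medium and electrical/mechanical specifications"),
    2: ("Data link",    "frames, an error-free line for the network layer, recovery of lost frames"),
    3: ("Network",      "packets, routing, error checks along the route, congestion control"),
    4: ("Transport",    "splits the data stream, end-to-end delivery, flow control"),
    5: ("Session",      "connecting to another computer, authentication, reliable sessions"),
    6: ("Presentation", "common services such as text conversion, compression, encryption"),
    7: ("Application",  "user-specific functions, e.g. distributed databases, transparency"),
}

# Only the bottom three layers are normally employed at a network node.
NODE_LAYERS = (1, 2, 3)

for number, (name, role) in OSI_LAYERS.items():
    where = "node + host" if number in NODE_LAYERS else "host only"
    print(f"Layer {number} ({name}, {where}): {role}")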

Spread-spectrum multiple access

Since collisions are so detrimental to network performance, methods have been developed to allow multiple transmissions on a broadcast network without necessarily causing mutual packet destruction. One of the most successful is called spread-spectrum multiple access (SSMA). In SSMA, simultaneous transmissions will cause only a slight increase in bit error probability for each user if the channel is not too heavily loaded. Error-free packets can be obtained by using an appropriate control code. Disadvantages of SSMA include wider signal bandwidth and greater equipment cost and complexity compared with conventional CSMA.

Network access: scheduled access

In a scheduling method known as time-division multiple access (TDMA), a time slot is assigned in turn to each node, which uses the slot if it has something to transmit. If some nodes are much busier than others, then TDMA can be inefficient, since no data are passed during time slots allocated to silent nodes. In this case a reservation system may be implemented, in which there are fewer time slots than nodes and a node reserves a slot only when it is needed for transmission.
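A minimal sketch of the TDMA idea, assuming three nodes with invented packet queues: each node is given one slot per frame and transmits only if it has something queued, so slots assigned to idle nodes go unused -- the inefficiency the reservation variant tries to avoid.

from collections import deque

# Each node has a queue of packets waiting to be sent (invented example data).
queues = {
    "A": deque(["A1", "A2", "A3"]),
    "B": deque([]),            # an idle node: its slots will be wasted
    "C": deque(["C1"]),
}

def tdma_round(queues):
    """One TDMA frame: every node gets exactly one slot, in a fixed order."""
    transmissions = []
    for node in sorted(queues):
        if queues[node]:
            transmissions.append((node, queues[node].popleft()))
        else:
            transmissions.append((node, None))   # slot allocated but unused
    return transmissions

print(tdma_round(queues))   # -> [('A', 'A1'), ('B', None), ('C', 'C1')]
print(tdma_round(queues))   # -> [('A', 'A2'), ('B', None), ('C', None)]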

Since all nodes can hear each transmission in a broadcast network, a procedure must be established for allocating a communications channel to the node or nodes that have packets to transmit and at the same time preventing destructive interference from collisions (simultaneous transmissions). This type of communication, called multiple access, can be established either by scheduling (a technique in which nodes take turns transmitting in an orderly fashion) or by random access to the channel.

Using E-Smells

This digital scent technology will be able to do more than allow you to attach e-smells to your e-mails. Imagine watching The Patriot on your DVD player with a DigiScents device plugged into it -- as the Colonial army's cannons blast, you can actually smell the gunpowder. Or, as the British army marches across the battlefield, you can smell the grass beneath them. The scent of the ocean could be emitted during scenes in which Benjamin Martin's (Mel Gibson) family seeks sanctuary in a freed slave village on the South Carolina coast. The whole idea here is to increase the realism and enhance the viewing of your favorite movies.

The same type of effect could be created for your favorite video games. While consoles like PlayStation 2 are designed to enhance the realism of video game graphics, a digital scent synthesizer could take games to a whole new level. Imagine smelling the bad guy who is approaching before you actually see him. Developers of racing games could embed the smell of burnt rubber or gasoline to make their games more realistic.

Before being attached to movies and games, Internet odors will likely permeate Internet advertising. Just as advertisers used scratch-and-sniff technology a couple of decades ago, they will likely use the novelty of digital scents to peddle their products now. Coca-Cola could embed its cola smell into banner ads, which could be triggered by a user scrolling over the ad. Suddenly, you're thirsty for a Coke. Sounds like pretty effective advertising.

Consumers may also benefit from this aromatic technology. With online spending on the rise, shoppers will now be able to sample some of the goods that they buy, including flowers, candy, coffee and other food products. Soon, you'll be able to stop and smell the roses without leaving your workstation.

Creating a Virtual Stink

Can you imagine a world with no smells? Think of some of the smells that you would never be able to enjoy, like homemade cookies, flowers or that scent that follows a summer rain. Smell adds so much to our experiences. Of course, without smell there is also no taste, since our sense of taste is almost completely dependent on our sense of smell. This world without smell exists on the Internet -- but that is about to change. You will soon have your choice of two computer peripheral devices that will make your nose as involved in your Web experience as your eyes and ears. Let's take a look at these devices.

iSmell Personal Scent Synthesizer

In Oakland, Calif., DigiScents, Inc. is developing a digital scent device called the iSmell. They are fully aware of how people will respond to the device's tongue-in-cheek name. Mentioning the iSmell to a friend is likely to provoke instant laughter. The company hopes the device's name will grab consumers' attention and help to sell this gadget designed to transmit digitized smells through your computer.

A prototype of the iSmell Personal Scent Synthesizer is shaped like a shark's fin, and it will be connected to your PC through a serial or universal serial bus (USB) port. It can be plugged into any ordinary electrical outlet. Here's how it works:
• DigiScents has indexed thousands of smells based on their chemical structure and their place on the scent spectrum.
• Each scent is then coded and digitized into a small file.
• The digital file is embedded in Web content or e-mail.
• A user requests or triggers the file by clicking a mouse or opening an e-mail.
• A small amount of the aroma is emitted by the device in the direct vicinity of the user.

The iSmell can create thousands of everyday scents with a small cartridge that contains 128 primary odors. These primary odors are mixed together to generate other smells that closely replicate common natural and manmade odors. The scent cartridge, like a printer's toner cartridge, will have to be replaced periodically to maintain scent accuracy. (A sketch of this mixing idea follows below.)

DigiScents has formed partnerships with several Web, interactive media and gaming companies to bring scents to your computer. RealNetworks plans to make DigiScents' ScentStream software available to its more than 115 million RealPlayer users. DigiScents has not announced when the iSmell will be available or how much it will cost.

SENX Scent Device

TriSenx is planning to take you one step further by allowing users to not only download scents, but to print out flavors that can be tasted. The Savannah, Ga.-based company has developed a patented technology that allows users to print smells onto thick fiber paper sheets and taste specific flavors by licking the paper coated with the smell.

The SENX machine is a printer-like desktop device that will produce smells based on data programmed into a Web page. SENX stands for Sensory Enhanced Net eXperience. Like the iSmell, the SENX machine will be activated by user actions. The fragrances and aromas are stored in a disposable cartridge within the SENX. This cartridge has 20 chambers, each holding a distinct scent. Thousands of smells can be created with a 20-chamber cartridge and a 40-palette rendition, which comprises two separate cartridges.

The SENX is 5.5 inches wide, 8 inches long and 2.5 inches tall (14 x 20 x 6.4 cm). Users will plug the device into an open external COM port on their computers, and it will be powered by a DC 6-volt rechargeable battery. TriSenx is already taking orders for the SENX machine, which will cost $269 and include the SenxWare Scent Design Studio software.
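Neither DigiScents nor TriSenx has published a file format, so the sketch below only illustrates the general idea described above: a digital scent stored as a small set of intensities for some of the 128 primary odors in the cartridge, emitted when a user action triggers it. Every name, slot number and value here is hypothetical.

# Hypothetical sketch: a digital scent as intensities (0.0-1.0) for a few of the
# 128 primary odors in the cartridge, keyed by primary-odor slot number.
fresh_cut_grass = {12: 0.8, 47: 0.3, 101: 0.1}   # invented slot numbers and values

def emit(scent, duration_seconds=2.0):
    """Pretend to drive the cartridge: print which primaries to release and how strongly."""
    for slot, intensity in sorted(scent.items()):
        print(f"release primary odor #{slot} at {intensity:.0%} for {duration_seconds}s")

# A user action (clicking a banner ad, opening an e-mail) would trigger the emission.
emit(fresh_cut_grass)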

How Internet Odors Will Work

Many of us spend just as much time in cyberspace touring the electronic landscapes of the Internet as we spend offline. But for all of the time we spend in front of our computer monitors, this virtual world lacks many of the real world's most precious attributes. One of the biggest drawbacks of the cyber world is its lack of realism. Most of us are born with five senses, allowing us to see, hear, touch, smell and taste; yet the Internet takes advantage of less than half of these.

When you log onto your computer, what senses are you using? Sight is probably the most obvious of the senses we use to collect information. The Internet is almost completely vision-based. While audio technology, like MP3 music files, has made a lot of noise recently, the Internet is made up mostly of words and pictures. You can also throw in touch as a third sense used in computer interaction, but that is mostly in terms of interfacing by way of keyboard and mouse. Since the beginning of the Internet, software developers have chosen to ignore our senses of smell and taste. However, there are at least two American companies who are planning to awaken all of your senses by bringing digital odors to the Internet.

We have the ability to recognize thousands of odors, and some scientists believe that smell has the power to unlock memories. In this section of How Dewsoft Stuff Will Work, you will learn how smells will be transmitted to your desktop and what other possible applications this technology could present.

Information and Communication Technology

Today, information and communication technology is becoming broader and more advanced, and it gives all users a wide perspective on the nature of technology, how to use and apply a variety of technologies, and the impact of information and communication technologies on themselves and on society. The facilities that technology provides are not meant to stand alone; they should be adopted and extended worldwide, and access at the public level should be far better. People in different parts of the world use and understand technology differently because of differences in their countries' development and technology implementation. VoIP (Voice over IP) is one of the most popular and hotly discussed communication technologies for the coming decades. So, let's talk about it.

VoIP Gateways:

An Overview
Gateways have become a central, yet complex, component in most state-of-the-art VoIP systems. Although they've been around for years, VoIP gateways remain something of a mystery. What, exactly, are these devices gateways to? Do they lead the way into a data network, a voice network, telephones, network management or outright confusion? In a way, they actually open the door to all of these areas.

VoIP gateways act as VoIP network translators and mediators. Perhaps most importantly, they translate calls placed through the public switched telephone network (PSTN) - the "regular" telephone system - into digital data packets that are compatible with an enterprise's VoIP system. VoIP gateways can also help direct VoIP calls to specific users with the assistance of built-in routing tables. Additionally, the units can translate between different VoIP protocols, such as H.323 and SIP, enabling compatibility between various VoIP systems and devices.

Given all of these benefits, it's easy to see why VoIP gateways are highly recommended for virtually any VoIP implementation. Yet this hasn't always been the case. In VoIP's early days, system designers often "VoIP-enabled" switches and routers to handle key gateway tasks. But as VoIP networks grew larger and more sophisticated, and as end users began demanding higher quality and more reliable service, most designers began specifying standalone VoIP gateways for their systems.
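The built-in routing tables mentioned above can be pictured as a simple dial-plan: the gateway matches the dialed number against a set of prefixes and hands the call to the corresponding IP destination and signalling protocol. The entries and function below are assumptions for illustration, not any vendor's actual configuration.

# Hypothetical dial-plan: the longest matching prefix of the dialed number decides
# where the call is sent and which VoIP protocol the far end speaks.
ROUTING_TABLE = {
    "1415":  ("10.1.1.20", "SIP"),     # example: a San Francisco branch office
    "44":    ("10.2.5.11", "H.323"),   # example: a UK office behind an H.323 gatekeeper
    "":      ("pstn-trunk", "PSTN"),   # default: hand the call back to the phone network
}

def route_call(dialed_number):
    """Return (destination, protocol) using the longest matching prefix."""
    best = max((p for p in ROUTING_TABLE if dialed_number.startswith(p)), key=len)
    return ROUTING_TABLE[best]

print(route_call("14155550123"))   # -> ('10.1.1.20', 'SIP')
print(route_call("442071234567"))  # -> ('10.2.5.11', 'H.323')
print(route_call("3312345678"))    # -> ('pstn-trunk', 'PSTN')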

Various Vendors

With VoIP technology steadily gaining momentum, VoIP gateway shoppers have an array of products to choose from. Leading VoIP gateway vendors include Cisco Systems, Mediatrix Telecom, Quintum Technologies, Stratus, Welltech Computer and Nortel Networks. VoIP gateways can be either hardware- or software-based. Hardware-based VoIP gateways - by far the most widely used approach - are available as standalone boxes, chassis cards or modules. Hardware VoIP gateways, while generally more expensive than their software counterparts, are usually preferred because they are viewed as more reliable, provide built-in interfaces and don't consume computer processing power.

In the enterprise market, VoIP gateways come in many different configurations. Buyers can select from products that offer numerous phone, fax machine, PBX and PSTN support capabilities. Additionally, for large enterprises with offices and branch operations spread around the country or world, VoIP gateways provide an effective way to extend and distribute voice communications systems.

At the market's low end, it's possible to find a basic VoIP gateway, featuring a phone jack, Ethernet router and firewall, for under $200. A device at this price level would likely offer a minimum of three ports: a standard RJ-11 telephone jack and two RJ-45 ports - one for a broadband modem/router and one for a computer or network sharing device. Such a system would be capable of handling the voice needs of a home or small office.

A mid-level VoIP gateway, costing anywhere from $400 to $2,000, offers additional interfaces supporting a wide range of phone system and network devices. These products also include various quality of service (QoS) features, network-thrifty voice compression and built-in security capabilities, such as encryption. The primary selection criteria for these VoIP gateways are the maximum packet throughput and the number of simultaneous phone calls supported. A VoIP gateway buyer needs to know just how much capacity his or her VoIP system needs, and these figures can only be arrived at through a thorough professional analysis.

At the market's high end are carrier-class VoIP gateways, costing several thousand dollars. Widely used by both telephone carriers and large enterprises, these devices support hundreds or even thousands of channels for advanced voice services, such as interactive voice response (IVR), a technology that allows callers to select an option from a voice menu. Other advanced functions supported by carrier-class VoIP gateways include voice recording, distributed voice announcements and conference calls.

Getting Smarter

Building new VoIP gateway features and functions, such as faster translations and support for emerging VoIP standards, represents a major challenge for vendors. Fortunately, many enhancements are software-based and can be delivered to customers fairly quickly and inexpensively in the form of a simple software upgrade.

Perhaps the biggest trend in VoIP gateway technology is the rapid shift toward "smarter" products. Most major vendors are developing products that work with a wider mix of VoIP products and technologies, paving the road to enhanced multi-vendor interoperability. This trend promises to allow businesses to cut costs by enabling them to purchase products from any company that offers the best features at the best price, rather than from a single vendor.

In the months and years ahead, VoIP gateway customers can expect more products, enhanced features and increased interoperability. These trends promise to help enterprises more easily build, maintain and upgrade VoIP networks that support both inexpensive and high-quality calls.

Building VoIP Gateways

One San Francisco hotel's experience installing a gateway-based VoIP system: White Star is a large hotel located on the US West Coast, serving guests from Asia, Europe, Latin America and the US. With a majority of its customers being business travelers, a large volume of long-distance and international phone calls is made from the hotel, and these calls are routed over the traditional telephone network (PSTN).

A feasibility study on VoIP was carried out and concluded with the following two main points:
• The VoIP voice quality is indistinguishable from the traditional phone calls.
• Rates for VoIP calls charged by Savytel represent a large saving, compared to the rates charged by the traditional telephone service providers.

Factors affecting VoIP quality

There are several factors that profoundly impact the quality of voice over the Internet. These factors can be described in terms of their general effect on VoIP quality: negative or positive.

Negative Factors

Of the three negative factors for VoIP performance, the first one is delay, which results in echo and talker overlap. The second one is jitter, which is essentially the variation in delay. The third problem is packet loss. These factors are explained in more detail below.

Delay

Delay results in echo and talker overlap. Echo becomes a problem when the round-trip delay becomes high. Talker overlap (the problem of one caller stepping over the other talker's speech) becomes significant if the one-way delay becomes greater than 250 milliseconds.

Jitter

Jitter is essentially the variation in delay. This is primarily introduced because of the variation in inter-packet arrival time.
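Jitter can be made concrete with a short sketch: given packet arrival times, compute the inter-arrival gaps and see how much they vary. The arrival times below are invented, and real VoIP gear usually uses a smoothed estimator rather than this simple average deviation, but the idea is the same.

# Arrival times of consecutive voice packets, in milliseconds (invented example).
# The sender emits one packet every 20 ms, so a perfect network would show 20 ms gaps.
arrivals_ms = [0, 20, 41, 59, 83, 100, 121]

gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]   # inter-packet arrival times
mean_gap = sum(gaps) / len(gaps)
jitter = sum(abs(g - mean_gap) for g in gaps) / len(gaps)      # average deviation from the mean gap

print(gaps)                               # -> [20, 21, 18, 24, 17, 21]
print(f"mean gap = {mean_gap:.1f} ms")
print(f"jitter   = {jitter:.1f} ms")      # the variation in delay felt by the receiver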

Packet Loss

Packet loss is a constant problem in packet-based networks. In a circuit-switched network, all speech in a given conversation follows the same path and is received in the order in which it is transmitted. If something is lost, the cause is a fault rather than an inherent characteristic of the system. Apart from these factors, there can be impairments caused by codecs. These impairments are due to the distortion introduced by the codec and to the interaction of network effects with codec operation.

Speech coding and compression

Both speech coding and compression have been used in traditional telephony for over two decades. With the exception of the local loop, almost all voice is carried over the PSTN in digital format. The received analog voice undergoes an analog-to-digital conversion at 8,000 samples per second with 8 bits per sample, producing a 64 kbps digital data stream. A codec is the device that performs the conversion from analog voice into a digital format and vice versa. The standard method used in traditional telephony is PCM (pulse code modulation), implemented by using a codec that conforms to ITU-T standard G.711.

Most humans can hear sound up to about 20 kHz, but traditional telephony uses low-pass filtering to remove everything but approximately the lower 4 kHz of the speech signal. In addition to this, voice over packet networks commonly use low bit rate codecs to compress the received speech. These low bit rate codecs preserve the parts of the speech that are important to the human listener and take out those that are not, such as silence and redundant sounds. This is generally known as perceptual coding and is used in a number of other areas too, such as MPEG-2 video compression, JPEG image compression and MP3 audio. Standardized codecs have been tested with multiple speakers and multiple languages, and the results are compared using MOS, a measurement of voice clarity that is explained in detail later in this chapter.

Positive Factors

Of the two positive factors for VoIP performance, the first one is bandwidth, which is absolutely necessary for adequate performance. The second factor is prioritization. Prioritization becomes increasingly important as the network gets congested.
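The 64 kbps figure for G.711 follows directly from the sampling parameters given above. The sketch below also shows how much of that stream fits into a 20 ms voice packet; the 20 ms packetization interval is a common choice assumed here, not something stated in the text.

SAMPLE_RATE_HZ  = 8000   # samples per second, as in the text
BITS_PER_SAMPLE = 8

bitrate_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE
print(f"G.711 bitrate: {bitrate_bps} bit/s = {bitrate_bps // 1000} kbps")   # 64000 bit/s = 64 kbps

# Assumed packetization interval of 20 ms (a common default, not from the text):
PACKET_INTERVAL_S = 0.020
payload_bits  = bitrate_bps * PACKET_INTERVAL_S
payload_bytes = payload_bits / 8
print(f"voice payload per packet: {payload_bits:.0f} bits = {payload_bytes:.0f} bytes")  # 1280 bits = 160 bytes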