
Visit: http://www.kumca2010.xtgem.com

What Is a Dual-Core Processor?

A dual-core processor is a single integrated circuit with two separate cores, or central processing units. Dual-core processors give a significant performance increase over single-core processors with complex software and games, particularly when multitasking. Applications and games that support dual-core technology benefit from both cores even while performing a single task; older applications that do not support dual-core technology see little improvement beyond better multitasking support.


History:

The first dual-core processor was created by IBM in 2001. IBM's POWER4 processor was designed for server applications and was not available for home use. The first dual-core processor for standard home PCs was the Intel Pentium EE 840, part of the Extreme Edition line, which carried a heavy price premium. Dual-core processors have since gone through many revisions that have improved performance and lowered prices.

Significance:
The introduction of dual-core processors was a huge leap in computing technology. Before their release, dual-processor computers were very uncommon; servers and high-end workstations were typically the only systems with two processors, and typical PCs did not have the space required to house a dual-processor motherboard. Applications and standard versions of operating systems did not take advantage of dual processors or dual-core processors until the introduction of affordable dual-core chips like AMD's Athlon 64 X2 and Intel's Core Duo line.

Types:
While Intel released the first dual-core processors for home use, it was AMD that took the early lead in dual-core technology: Intel's first dual-core Pentium line was outpaced by AMD's Athlon 64 X2. Soon after, Intel developed the Core Duo line, which remained the leader until Intel's Core 2 Duo was released. In 2009, AMD released newer versions of the Athlon; however, Intel's dual-core Core 2 Duo processors remained the highest rated.

Benefits:
Dual-core processors allow each CPU to perform separate tasks or to collaborate on one. Since the two CPUs sit close together on the same chip, the lag in transferring data back and forth is greatly reduced compared to a dual-processor system, as is the energy required to do so. Dual-core processors typically cost less than dual-processor setups and have more motherboard options.

Performance:
In operating systems that support thread-level parallelism (TLP), multiple commands can be issued to the processor without waiting for each response. Previously, with single-core processors, the operating system would send a command to the processor and wait for a response before sending the next one. A dual-core processor can complete more commands in the same time frame because two separate CPUs perform the calculations, resulting in a higher-performing system.
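As a rough illustration (not from the original text), the sketch below uses Python's `concurrent.futures` to submit two independent tasks without waiting for the first to finish, which is the pattern TLP enables; the task function and its workloads are invented for the example.

```python
# Sketch: issue two "commands" concurrently instead of waiting for
# each response before sending the next one.
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # stand-in for one unit of work sent to a core
    return sum(range(n))

# Submit both tasks up front; on a dual-core CPU the OS is free to
# schedule them on separate cores.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(task, 1_000)
    f2 = pool.submit(task, 2_000)
    results = [f1.result(), f2.result()]

print(results)  # [499500, 1999000]
```

Note that CPython's global interpreter lock limits CPU-bound threads to one at a time; a process pool is the usual way to get true parallelism across cores, but the submit-without-waiting pattern is the same.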

What Is a Pentium Dual Core Processor?


Dual-core processors are much faster and more powerful than single-core processors. Intel introduced the first commercial microprocessor in 1971, and in 1980 was chosen by IBM to develop the chip that would power its personal computers. The Intel Pentium Dual-Core processor premiered in 2007 and by 2010 encompassed a family of processor chips.

Two Processors:
A computer works by "processing" instructions through a chip designed to take information and produce output readable by an operating system. The Pentium Dual-Core processor put two processors together on a single chip.

Frequency:


Both cores run at the same frequency and share an L2 cache of up to 2 megabytes. The front-side bus runs at 800 MHz. These speeds improve execution time and energy efficiency.

Smaller Versions:
The dual-core family includes desktop and mobile versions that are less powerful, with smaller L2 caches and lower frequencies. The Intel Pentium Dual-Core processors E2140, E2160, E2180, E2200, and E2220 use the Allendale core, which includes 2 MB of native L2 cache with half disabled, leaving 1 MB. This compares to the higher-end Conroe core, which natively features 4 MB of L2 cache. Intel has shifted its product lines, positioning the Core 2 line as Mainstream/Performance, Pentium Dual-Core as Mainstream, and the new Celeron (based on the Conroe-L core) as Budget/Value. The E5000 and E6000 series use the same 45 nm Wolfdale-3M core as the E7000-series Core 2s, which natively has 3 MB of L2 cache; 1 MB is disabled, for a total of 2 MB, twice the amount in the original Allendale Pentiums. The Wolfdale core is capable of SSE4, but the feature is disabled in these Pentiums. The Pentium E2210 is an OEM processor based on Wolfdale-3M with only 1 MB of its 3 MB L2 cache enabled.

In computing, a processor is the unit that reads and executes program instructions, which are fixed-length (typically 32 or 64 bits) or variable-length chunks of data. The data in an instruction tells the processor what to do. Instructions are very basic operations, such as reading

data from memory or sending data to the user display, but they are processed so rapidly that we experience the results as the smooth operation of a program.

Processors were originally developed with only one core. The core is the part of the processor that actually reads and executes instructions. A single-core processor can process only one instruction at a time. (To improve efficiency, processors commonly use internal pipelines, which allow several instructions to be processed together; they are still fed into the pipeline one at a time, however.)

Homogeneous multi-core systems include only identical cores, unlike heterogeneous multi-core systems. As with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading. Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics.

The performance gained by using a multi-core processor depends very much on the software algorithms and their implementation. In particular, the possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may see speedups near the number of cores, or even beyond that if the problem can be split finely enough to fit within each core's cache, avoiding the much slower main memory. Many typical applications, however, do not realize such large speedups. The parallelization of software is a significant ongoing research topic.
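Amdahl's law mentioned above can be written as speedup = 1 / ((1 - p) + p / n), where n is the number of cores and p is the fraction of the work that can be parallelized. A minimal sketch in Python:

```python
# Amdahl's law: upper bound on speedup with n cores when a fraction p
# of the work can run in parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the code parallelized, two cores give well under 2x:
print(round(amdahl_speedup(0.95, 2), 2))  # 1.9
# An embarrassingly parallel workload (p = 1) scales linearly:
print(amdahl_speedup(1.0, 2))             # 2.0
```

This is why the text notes that typical applications rarely see a full 2x gain from a dual-core chip: the serial fraction dominates quickly.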

Multi-core processor:

A multi-core processor is composed of two or more independent cores; one can describe it as an integrated circuit that contains two or more individual processors (called cores in this sense). Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor, or CMP) or onto multiple dies in a single chip package. A many-core processor is one in which the number of cores is large enough that traditional multiprocessor techniques are no longer efficient, largely because of congestion in supplying sufficient instructions and data to the many processors. This threshold is roughly in the range of several tens of cores and probably requires a network on chip.



Terminology:

The terms multi-core and dual-core most commonly refer to some sort of central processing unit (CPU), but they are sometimes also applied to digital signal processors (DSPs) and systems-on-a-chip (SoCs). Additionally, some use these terms to refer only to multi-core microprocessors manufactured on the same integrated circuit die, and refer to separate microprocessor dies in the same package by another name, such as multi-chip module. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit unless otherwise noted. Some systems use many soft microprocessor cores placed on a single FPGA; each such "core" can be considered both a semiconductor intellectual property core and a CPU core.

Development:
As manufacturing technology improves, reducing the size of individual gates, the physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. The demand for more capable microprocessors encourages CPU designers to use various methods to increase performance. Some instruction-level parallelism (ILP) methods, like superscalar pipelining, are suitable for many applications but are inefficient for others that tend to contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and using multiple independent CPUs is one common way to increase a system's overall TLP. A combination of increased available die space (due to refined manufacturing processes) and the demand for increased TLP led to the development of multi-core CPUs.

Architecture:
The composition and balance of the cores in multi-core architectures vary greatly. Some architectures use one core design repeated consistently ("homogeneous"), while others use a mixture of different cores, each optimized for a different role ("heterogeneous"). The article "CPU designers debate multi-core future" by Rick Merritt (EE Times, 2008) includes these comments: "Chuck Moore [...] suggested computers should be more like cell phones, using a variety of specialty cores to run modular software scheduled by a high-level applications programming interface.

[...] Atsushi Hasegawa, a senior chief engineer at Renesas, generally agreed. He suggested the cellphone's use of many specialty cores working in concert is a good model for future multi-core designs.

[...] Anant Agarwal, founder and chief executive of startup Tilera, took the opposing view. He said multi-core chips need to be homogeneous collections of general-purpose cores to keep the software model simple."

Hardware:
The general trend in processor development has moved from dual-, tri-, quad-, hexa-, and octo-core chips to ones with tens or even hundreds of cores. In addition, multi-core chips combined with simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" cores promise further performance and efficiency gains, especially for multimedia, recognition, and networking applications.

There is also a trend toward improving energy efficiency by focusing on performance per watt, using advanced fine-grain or ultra-fine-grain power management and dynamic voltage and frequency scaling (e.g., in laptop computers and portable media players).

[Dual-Core Intel Xeon Processor 5200 Series Thermal/Mechanical Reference Design: mechanical drawings 1 of 3 through 3 of 3 omitted.]

Software impact:
For example, an anti-virus application may create one thread for the scan process while its GUI thread waits for commands from the user (e.g., to cancel the scan). In such cases, a multi-core architecture is of little benefit to the application itself, because a single thread does all the heavy lifting and the work cannot be balanced evenly across cores. Programming truly multithreaded code often requires complex coordination of threads and can easily introduce subtle, difficult-to-find bugs due to the interleaving of processing on data shared between threads (thread safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level multiprocessor hardware: although threaded applications incur little additional performance penalty on single-processor machines, the extra development overhead has been difficult to justify given the preponderance of single-processor machines. Also, inherently serial tasks, such as decoding the entropy coding used in video codecs, are impossible to parallelize, because each result generated is needed to produce the next result of the entropy decoder.
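The thread-safety hazard described above can be sketched in a few lines of Python; the counter and worker function are invented for the illustration. Without the lock, the two threads' read-modify-write sequences can interleave and silently lose increments.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # without the lock, the read-modify-write
            counter += 1    # below can interleave and lose updates

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- deterministic only because of the lock
```

Finding the unlocked version of this bug in a large program is exactly the kind of debugging difficulty the text describes: the loss happens only under certain interleavings, so it may not reproduce on every run.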

Given the increasing emphasis on multi-core chip design, driven by the serious thermal and power-consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software that fully exploits the resources provided by multiple cores, they will ultimately reach an insurmountable performance ceiling.

The telecommunications market was one of the first to need a new design for parallel datapath packet processing, and adoption of multi-core processors for the data path and the control plane was very quick. These MPUs are replacing the traditional network processors that were based on proprietary micro- or pico-code. Parallel programming techniques can benefit from multiple cores directly. Some existing parallel programming models, such as Cilk++, OpenMP, FastFlow, Skandium, and MPI, can be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism called TBB (Threading Building Blocks).

Other research efforts include the Codeplay Sieve System, Cray's Chapel, Sun's Fortress, and IBM's X10.

Multi-core processing has also affected modern computational software development. Developers programming in newer languages may find that those languages do not support multi-core functionality, which then requires using numerical libraries to access code written in languages like C and Fortran, which perform math computations faster than newer languages like C#. Intel's MKL and AMD's ACML are written in these native languages and take advantage of multi-core processing.

Managing concurrency acquires a central role in developing parallel applications. The basic steps in designing parallel applications are:

Partitioning: The partitioning stage of a design is intended to expose opportunities for parallel execution. The focus is on defining a large number of small tasks in order to yield what is termed a fine-grained decomposition of the problem.

Communication: The tasks generated by a partition are intended to execute concurrently but cannot, in general, execute independently. The computation to be performed in one task will typically require data associated with another task, so data must be transferred between tasks to allow computation to proceed. This information flow is specified in the communication phase of the design.

Agglomeration: In the third stage, development moves from the abstract toward the concrete. Developers revisit decisions made in the partitioning and communication phases with a view to obtaining an algorithm that will execute efficiently on some class of parallel computer. In particular, they consider whether it is useful to combine, or agglomerate, tasks identified by the partitioning phase so as to provide a smaller number of larger tasks, and whether it is worthwhile to replicate data and/or computation.

Mapping: In the fourth and final stage, developers specify where each task is to execute.
This mapping problem does not arise on uniprocessors or on shared-memory computers that provide automatic task scheduling.

On the server side, multi-core processors are ideal because they allow many users to connect to a site simultaneously with independent threads of execution, enabling web servers and application servers with much better throughput.

Licensing:
Typically, proprietary enterprise-server software is licensed per processor. In the past a CPU was a processor, and most computers had only one CPU, so there was no ambiguity. Now there is the possibility of counting cores as processors and charging a customer for multiple licenses on one multi-core CPU. However, the trend seems to be to count a dual-core chip as a single processor: Microsoft, Intel, and AMD all support this view. Microsoft has said it will treat a socket as a single processor. Oracle counts an AMD X2 or an Intel dual-core CPU as a single processor but uses other counts for other types, especially processors with more than two cores. IBM and HP count a multi-chip module as multiple processors; if multi-chip modules counted as one processor, CPU makers would have an incentive to build large, expensive multi-chip modules so their customers could save on software licensing. The industry seems to be slowly heading toward counting each die (see integrated circuit) as a processor, no matter how many cores it contains.

Embedded applications:
Embedded computing operates in an area of processor technology distinct from that of "mainstream" PCs, but the same technological drivers toward multi-core apply here too. Indeed, in many cases the application is a natural fit for multi-core technologies, if the task can easily be partitioned among the different processors. In addition, embedded software is typically developed for a specific hardware release, making issues of software portability, legacy code, or supporting independent developers less critical than for PC or enterprise computing. As a result, it is easier for developers to adopt new technologies, and there is consequently a greater variety of multi-core processing architectures and suppliers. As of 2010, multi-core network processing devices have become mainstream, with companies such as Freescale Semiconductor, Cavium Networks, Wintegra, and Broadcom all manufacturing products with eight processors. The same trend applies in digital signal processing: Texas Instruments has the three-core TMS320C6488 and four-core TMS320C5441, and Freescale the four-core MSC8144 and six-core

MSC8156 (and both have stated they are working on eight-core successors). Newer entries include the Storm-1 family from Stream Processors, Inc., with 40 and 80 general-purpose ALUs per chip, all programmable in C as a SIMD engine, and picoChip, with three hundred processors on a single die, focused on communication applications.

List of Dual Core Processors:


Dual-core central processing units (CPUs) are those with two processing cores. Think of it as having two CPUs in one chip, giving double the processing power. The two major CPU manufacturers, Intel and Advanced Micro Devices (AMD), both offer a wide array of dual-core processors for home computing. Intel's Pentium D and Core series processors offer dual cores and above; for AMD, an "X2" in the name of a processor denotes a dual-core design.

Intel Pentium D:
The "D" in Pentium D stands for "dual-core." Pentium D processors are commonly found in entry-level desktop models. These processors include up to 2 megabytes (MB) of level-2 (L2) cache and bus speeds of up to 1066 megahertz (MHz). They come in the land-grid array (LGA) 775 socket package. Clock speeds range from 1.6 gigahertz (GHz) to 3.2 GHz.

AMD Athlon X2:


AMD's Athlon X2 comes in clock speeds of up to 3.2 GHz. Its L2 cache is a modest 1 MB, but its bus runs at up to 3600 MHz. The CPU is compatible with socket AM2, AM2+, and 939 motherboards.

Intel Core i3:


The Intel Core i3 series runs at clock speeds from 2.93 GHz to 3.2 GHz. Bus speeds are measured in gigatransfers per second (GT/s), and all models run at 2.5 GT/s. They also all come with 4 MB of "Smart Cache." Intel has packaged the Core i3 with high-definition (HD) capable graphics chipsets.

AMD Phenom II X2:


The Phenom II X2 has specifications very similar to the Athlon X2's, but with a higher bus speed of up to 4000 MHz. Its L2 cache is only 512 kilobytes (KB). These processors are compatible only with socket AM3 motherboards. Phenom II CPUs come equipped with HD graphics capability.

Intel Core i5:


Some Core i5 processors come in dual-core versions, although the higher-end models are quad-core. The specifications are similar to the Core i3's, with 4 MB of "Smart Cache" and a 2.5 GT/s bus. Clock speeds start at 3.2 GHz and run up to 3.6 GHz.

How to Take Advantage of a Dual Core Processor:


Conventional wisdom says two heads are better than one. By that logic, a computer with a dual-core processor, in a sense two brains, should be better than those old single-core relics. Unfortunately, one of those two brains is often asleep. The two cores can work in parallel (multithreading) to great effect, but only if the software in use was written to take advantage of that functionality. While you wait for new software with multithreading capability, there are a few things you can do to take advantage of both cores.

Instructions:
1. Use software that already has multithreading capability. Enhancing or modifying existing software to take advantage of it happens on the developer side, not with the end user. Update to new versions of your legacy software when they become available.

2. Upgrade your operating system. Even if your chosen software cannot yet multithread, your OS can manage multiple threads globally; this is called thread-level parallelism (TLP). If you are running several single-threaded programs, Windows XP and later can optimize the individual tasks through TLP. If you have a browser open, a game playing, and a virus scanner running, TLP lets the OS spread those tasks across both cores.

3. Check a current list of multithread-capable software before you buy. Makers of resource-intensive programs were the first to jump on board, as they benefit the most from the new capability. Graphic design, video and audio editing, and computer-aided drawing programs all rushed multithreaded versions to market, and games are doing the same.

4. Contact several makers of the type of software you may want to buy. Ask them directly whether their product can multithread and, if not, what their plans are for implementing that capability. Use this as a criterion in your decision making.

Advantages:
- Having a multi-core processor means the computer will work faster for certain programs.
- The computer may not run as hot.
- The computer needs less power, because it can turn off some sections when they aren't needed.
- More features can be added to the computer.
- The signals between the CPUs travel shorter distances, so they degrade less.

Disadvantages:
- They do not work at twice the speed of a normal processor; the gain is only about 60 to 80 percent.
- The speed the computer actually achieves depends on what the user is doing with it.
- They cost more than single-core processors.
- They are more difficult to manage thermally than lower-density single-core processors.

Conclusion:
In the coming years the trend will move more and more toward multi-core processors. The main reason is that they are faster than single-core processors and can still be improved. But there will still be some applications for single-core processors, because not every system needs a fast processor.