Selecting the right SoC is more important than the UI!

Designing Adam 2

Hello,

There were many questions on the main blog about the switch from Tegra to OMAP. I thought we should clarify this.

So which is better, Tegra or OMAP? NVidia will say Tegra, of course, and TI will say OMAP. Does that mean we should go by the benchmarks? Or maybe by the raw specifications of each SoC? Isn’t OMAP’s memory bandwidth higher than Tegra 3’s and Snapdragon’s? But Tegra 3 is quad-core, and its GPU is newer, too. Then why does the iPad 2 beat Tegra 3 by miles on GLBenchmark? We wrestled with many similar questions when we had to pick one. If you followed kernel development, you’d know that OMAP was clearly the next SoC Google would support, so this decision had to be made on our end, and fast.

Answer came from a very experienced veteran in the industry (one of our 3 mentors), who…



USB 3.0: Is it just the speed?

Chances are that all your peripherals connect to your computer through a USB port, and the ones that don’t make you curse their makers: why couldn’t they have made your HD video camera connect over USB instead of FireWire? There are other situations, too, where data transfer over USB feels painfully slow. For example, my Nokia N95 is excruciatingly slow to sync with my computer when I’m transferring large amounts of data, like videos or a completely new playlist!

[Image: screenshot of a slow USB transfer]

So, although USB did save us from the tangle of proprietary connectors (<sarcasm>the iPods and iPhones still have their own Dock Connector</sarcasm>) and from the slow world of serial and parallel ports, its speed hasn’t kept pace with the rate at which we consume more and more data, increasingly in portable form. A 1-hour HD movie can take ages to transfer from the camera to the computer. This has long been unacceptable, but there was no choice!


Well, USB 3.0 is here, and maybe now everyone *will* use the USB port for their peripherals. The theoretical jump in speed takes us from the 480 Mbps at which USB 2.0 maxed out to 5 Gbps. But it’s not just the speed: power consumption has gone down by about 3x as well. This was made possible by allowing devices to drop into a low-power idle mode when not in active use. Further reduction came from doing away with wasteful protocol bits, increasing the efficiency of data transfer while reducing overhead.
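To get a feel for what the 480 Mbps-to-5 Gbps jump means for that 1-hour HD movie, here is a back-of-the-envelope sketch. The signaling rates are from the USB specs, but the file size and the protocol-efficiency factors are rough assumptions of mine, not measured values:

```python
# Back-of-the-envelope transfer times for a hypothetical 4 GB HD movie.
# Efficiency factors model protocol overhead and are assumptions,
# not measurements.

def transfer_seconds(file_bytes, signal_rate_bps, efficiency):
    """Seconds to move file_bytes at signal_rate_bps scaled by efficiency."""
    effective_bps = signal_rate_bps * efficiency
    return file_bytes * 8 / effective_bps

FILE_BYTES = 4 * 1024**3   # assumed 4 GB movie file
USB2_BPS = 480 * 10**6     # USB 2.0 signaling rate: 480 Mbps
USB3_BPS = 5 * 10**9       # USB 3.0 signaling rate: 5 Gbps

usb2 = transfer_seconds(FILE_BYTES, USB2_BPS, 0.5)  # assumed ~50% efficiency
usb3 = transfer_seconds(FILE_BYTES, USB3_BPS, 0.8)  # assumed ~80% efficiency

print(f"USB 2.0: {usb2 / 60:.1f} min, USB 3.0: {usb3 / 60:.1f} min")
```

Even with these rough numbers, a transfer that takes a couple of minutes over USB 2.0 drops to under ten seconds over USB 3.0.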

[Image: SuperSpeed USB logo]

And all this while retaining backwards compatibility with hosts that support only USB 2.0. Physical compatibility of the connectors had to be maintained as well.


If you would like to understand how this 10x increase in speed was achieved alongside the 3x power reduction and physical compatibility, read the gory details at EE Times.

The first of the GPU/CPU combo SoCs

Microsoft has come out with the latest Xbox 360 SoC, which combines the GPU and CPU on the same die, beating all the other contenders to it by a fair margin. Microsoft engineers unveiled the new system at the Hot Chips conference.

The system diagram (from an MSDN blog entry):
[Image: Xbox 360 SoC system diagram]

From what is known about this hardware:

  • It is based on the IBM/GlobalFoundries 45nm process
  • It is the first “consumer”-oriented chip to merge the CPU, the GPU, memories, and I/O on the same die, in a single piece of silicon
  • The reduction in transistor geometry (from 90nm to 45nm) lowers power consumption and heat dissipation, which lets Microsoft cut the cost of the system through reduced cooling requirements and a smaller-footprint power supply
  • A module called “FSB Replacement” replicates the traditional interconnect between a discrete GPU and CPU. But here the engineers did not go for an LLI (Low Latency Interconnect); instead they deliberately introduced artificial latency to keep system performance on par with the existing version.
  • The “Red Ring of Death” problems should be far rarer now.

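The “FSB Replacement” point is worth a closer look: the on-die link is faster than the old discrete front-side bus, so padding it back to legacy timing keeps existing games behaving identically. Here is a toy sketch of that idea; this is my illustration of the concept, not Microsoft’s actual design, and the latency numbers are invented placeholders:

```python
# Toy model of latency matching (not Microsoft's actual design).
# The new on-die link is faster than the legacy discrete FSB, so the
# "FSB Replacement" pads each transaction to reproduce legacy timing.
# Both latency values below are invented placeholders.

LEGACY_FSB_LATENCY_NS = 100  # assumed latency of the old discrete FSB
ON_DIE_LATENCY_NS = 20       # assumed latency of the new on-die link

def padded_latency(on_die_ns, legacy_ns):
    """Total latency after padding the fast link to match the legacy bus."""
    padding = max(0, legacy_ns - on_die_ns)
    return on_die_ns + padding

print(padded_latency(ON_DIE_LATENCY_NS, LEGACY_FSB_LATENCY_NS))
```

The design choice is counterintuitive but sound: a console's software is tuned against fixed hardware timing, so a *faster* bus could change behavior in shipped games, while a latency-matched one cannot.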
For a more in-depth analysis, head over to ArsTechnica.