



 Post subject: IBM ceases Cell
PostPosted: Thu Nov 19, 2009 4:44 pm 

Joined: Thu Nov 11, 2004 7:34 am
Posts: 130
Location: Bielefeld, FRG
heise.de did an interview with a senior IBM executive (David Turek) who confirmed that IBM is ceasing the Cell. All further development is halted; the focus is now on Power and BlueGene. Parts of Cell will live on in some other incarnation, and IBM encourages developers to go OpenCL.

http://www.heise.de/newsticker/meldung/ ... 64497.html


 Post subject:
PostPosted: Fri Nov 20, 2009 3:30 am 
Genesi

Joined: Fri Sep 24, 2004 1:39 am
Posts: 1422
From something we wrote a while ago...

Broadband networks allow both data and applications to travel between users. If all members of a network - that is, all computers and computing devices on the network - are constructed from a common computing module, then we have a totally pervasive solution: everyone can get everything, everywhere. A common computing module with a consistent structure and the same instruction set architecture (ISA) was the objective of the CELL technology. The idea was that the members of the network, e.g., clients, servers, PCs, mobile computers, game machines, PDAs, set-top boxes, appliances, digital televisions and other devices, would use the same core processor logic to ensure compatibility. The consistent modular structure would enable efficient, high-speed processing of applications and data by the network's members and the rapid transmission of applications and data over the network.

All this suggests a new programming model for transmitting data and applications over a network and for processing them among the network's members (as opposed to data simply being transferred between stand-alone devices that must run the same application software to process and display what is sent). This was the original CELL programming model. It was to be a software CELL, transmitted over the network for processing by any network member (some to a higher degree and some to a lesser degree). Each software CELL was to have the same structure and contain both applications and data. The "old" stuff would still work, but it needed to be compiled in a new way. What is created is a dynamic not unlike the advent of the fax machine: to send or receive a fax you had to have one (and if you didn't, too bad!). This means the code for the applications *must* be based upon the same common ISA. All computing resources on such a network would have to have the same basic structure and employ the same ISA, so any particular resource performing the processing could be located anywhere on the network and dynamically assigned to the activity required.
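To make the model a little more concrete, here is a purely hypothetical sketch in C - not IBM's actual format, and every name in it is invented for illustration - of what such a self-describing software CELL might look like: code and data packed together and tagged with the one common ISA, so that any member of the network can decide to execute it or pass it on.

/* Hypothetical illustration only: the software CELL packet format was
 * never published in this form. The point is that one self-describing
 * unit carries both the code and the data it operates on, tagged with
 * the common ISA shared by every member of the network. */
#include <stddef.h>
#include <stdint.h>

enum cell_isa { CELL_ISA_COMMON = 1 };    /* the single shared ISA */

struct software_cell {
    uint32_t      isa;        /* must match the receiver's ISA          */
    uint32_t      flags;      /* routing / priority hints               */
    size_t        code_size;  /* bytes of executable code that follow   */
    size_t        data_size;  /* bytes of payload data after the code   */
    unsigned char body[];     /* code_size bytes of code, then the data */
};

/* A member executes a cell only if it speaks the common ISA;
 * otherwise it forwards the cell to another member of the network. */
static int can_execute(const struct software_cell *c)
{
    return c->isa == CELL_ISA_COMMON;
}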


All that is to say that, while the processor itself may be discontinued, the underlying idea of the CELL technology will probably be reincarnated through the Common Platform Alliance:

http://www.commonplatform.com/

At the end of the day, people will reach whatever "cloud" they want through a device. IBM will have to go back into the microelectronics market again one day...

R&B :)

_________________
http://bbrv.blogspot.com


 Post subject:
PostPosted: Fri Nov 20, 2009 1:05 pm 

Joined: Tue Mar 31, 2009 10:24 pm
Posts: 171
got to agree with Bill here, though for a different reason. cell never managed to gain momentum (beyond sony), but its philosophy has been gaining more and more ground. so what does it teach? that people want to solve inherently-parallel tasks in linearly-scalable time (i.e. time that scales with the resources thrown at the task), and to do that at a better rate than stacked-together CPUs can offer. enter reasonably-programmable high-parallelism architectures (tm).

in accordance with that, different performance-targeting consumer solutions (in their respective power domains) have been converging toward a common long-term roadmap. naturally, some of the parties drop off along the way due to their slower pace/higher cost/sheer redundancy. but the roadmap remains the same: linear performance control over a huge domain of computational tasks. from handhelds to supercomputers, people want the same thing - they want to be able to tackle parallel workloads in a parallel fashion. preferably through a slider, saying how much performance goes where today ; )

on the ever-dynamic desktop, GPGPU/CGPU (compute-GPU), conveniently interfaced through openCL, does all cell ever intended to do, often better and/or cheaper than cell. now we have CGPU solutions even in the most power-sensitive markets - e.g. ziilabs (formerly 3dlabs) with their DMS/ZMS series of 'GPU-alike' architectures, targeting handhelds and such. but even the 'canonical' GPU handheld solutions are largely ES 2.0 compliant, and ready for 'high-performance computing' (HPC) through openCL. yes, all that performance is at your fingertips, ready for you to put it to work. and it scales well with your needs; we just need to start thinking in the right terms to be able to utilize it. well, seems that ibm already knew that ; )
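to illustrate the point, here's a minimal openCL host-side sketch in plain C - error handling stripped, and the kernel, sizes and names are my own picks, not from any particular vendor. the same kernel source gets built at run time for whatever device the first platform reports - CPU, GPU or accelerator - and that device-agnostic dispatch is pretty much the 'one common target everywhere' role cell was aiming for.

/* Minimal OpenCL host sketch: build one kernel for whatever device
 * the platform offers and run it over a small buffer.
 * Error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float f) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= f;"
    "}";

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* CL_DEVICE_TYPE_DEFAULT: take whatever device the platform offers */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);

    /* compile the kernel at run time for the chosen device */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &factor);

    size_t global = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */

    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}

swap CL_DEVICE_TYPE_DEFAULT for CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_CPU and the rest of the program stays identical - that's the whole pitch.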

