Evolutionary Fabric. Revolutionary Scale.


Quite catchy – Cisco's way of branding its Unified Fabric architecture…

Its promise was to deliver flexibility across physical, virtual and cloud environments for any application – and it now promises enhancements for the new generation of massively scalable data centres dealing with Big Data/Hadoop environments and bare-metal deployments. These data centres pack massive blade density and demand flexible workload mobility (VMs moved freely) without compromising on plug-and-play capability or on network and storage transport throughput.

Unified Fabric virtualizes data centre resources and connects them through a highly scalable, high-performance, high-bandwidth network, converging multiple protocols onto a single physical network.

At Locuz we have managed to exploit this further by bringing automation all the way up to the application layer. Our HPC and high-performance storage capabilities come in handy here, as business computing has been echoing the same needs we have been addressing for a long time now.

Want to build the next-generation data centre? Review our reference architecture and do not hesitate to write to preetam.oswal@locuz.com.

Cheers!

Uttam

Interesting Comment by Neelesh Gupta


Neelesh is my childhood friend. We recently got connected again after some 20 years – thanks to Web 2.0. To my last post, "High Performance Computing – not the same anymore", he made a very interesting comment, and it cannot stay tucked away in the comment corner of my blog; it deserves to be up here for everyone to read. Neelesh is now based in NY and works for Citibank as a Vice President (I can now guess what he is VP of!).

Thanks Neelesh!

====================================

Uttam,

It's interesting that you mention CUDA and the advent of GPU-based computing.

This is something most of the high-end investment banking houses here in NYC, and also in the UK, are seriously looking at. We currently use Symphony-based grid orchestration for a lot of our compute needs for various risk calculations (Monte Carlo simulations, prepayment modeling and fixed-income risk calculations).

But now several teams are trying to evaluate moving to GPU-based computing. My initial reaction was that GPU-based computing is good if you are doing "pure-mathematical-grind" type calculations rather than something that requires "if-else" style structured logic. That does not come as a surprise, as GPUs were primarily built for that kind of stuff – image rendering and graphics-type programming, where matrix algebra is the cornerstone of all computations.

What made us a little hesitant as part of our evaluation and possible future adoption was that someone using CUDA is sort of married to the architecture, and it is hard to unplug from it. Most of the code logic in the CUDA world is very intrusive and not platform agnostic. That might not be a big issue, but it is something software engineers are usually cautious about. (P.S.: we were looking at the NVIDIA-based products.)

But on the flip side there are huge cost savings in hardware and data center needs. We were amazed by the reduction in hardware we would see if we ever redid all of our code using the CUDA-based API.

While this space is sure to evolve in the very near future in terms of compute power, I also have a strong suspicion that the likes of Intel/AMD will try to put something in place in their architectures to prevent or limit folks from leveraging the massive GPU power, or at least make sure that they are somehow part of the equation. At the end of the day the "instruction set" – the brain power – still sits with the main CPU, and they (Intel/AMD) could tweak things around…

Cheers!!

====================================
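Neelesh's point about GPU work being a "pure mathematical grind", and about being married to the architecture, is easiest to see in a tiny example. Below is a minimal sketch of my own (illustrative only – the names, parameters and market numbers are made up, not code from any bank): every thread runs the same branch-free arithmetic on its own data, which is exactly the workload shape GPUs like, and the <<<...>>> launch syntax and cudaMalloc/cudaMemcpy calls are CUDA-specific, which is exactly the lock-in he describes.

// monte_carlo_sketch.cu -- illustrative only; names and parameters are made up.
// Each thread prices one simulated path: pure arithmetic, no if-else business logic,
// which is the kind of work that maps well onto a GPU.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

__global__ void terminalPrice(const float *normals, float *payoff, int n,
                              float s0, float k, float r, float sigma, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                       // bounds check only
    // One-step geometric Brownian motion: S_T = S0 * exp((r - 0.5*sigma^2)*t + sigma*sqrt(t)*Z)
    float st = s0 * expf((r - 0.5f * sigma * sigma) * t + sigma * sqrtf(t) * normals[i]);
    payoff[i] = fmaxf(st - k, 0.0f) * expf(-r * t);   // discounted call payoff
}

int main()
{
    const int n = 1 << 20;                    // one million paths
    float *h_normals = (float *)malloc(n * sizeof(float));
    float *h_payoff  = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i)               // crude standard normals via Box-Muller, on the host
        h_normals[i] = sqrtf(-2.0f * logf((rand() + 1.0f) / (RAND_MAX + 2.0f))) *
                       cosf(2.0f * 3.14159265f * rand() / RAND_MAX);

    float *d_normals, *d_payoff;              // CUDA-specific memory management...
    cudaMalloc(&d_normals, n * sizeof(float));
    cudaMalloc(&d_payoff,  n * sizeof(float));
    cudaMemcpy(d_normals, h_normals, n * sizeof(float), cudaMemcpyHostToDevice);

    // ...and the CUDA-specific kernel launch syntax Neelesh refers to.
    terminalPrice<<<(n + 255) / 256, 256>>>(d_normals, d_payoff, n,
                                            100.0f, 100.0f, 0.05f, 0.2f, 1.0f);
    cudaMemcpy(h_payoff, d_payoff, n * sizeof(float), cudaMemcpyDeviceToHost);

    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += h_payoff[i];
    printf("Estimated option price: %f\n", sum / n);

    cudaFree(d_normals); cudaFree(d_payoff);
    free(h_normals); free(h_payoff);
    return 0;
}

Notice that nothing in the kernel is tied to a particular bank or model – but everything about how it is compiled, launched and fed with memory is tied to NVIDIA's toolchain, which is the trade-off his teams were weighing.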

High Performance Computing – not the same anymore


CPU-intensive scientific and enterprise computing is not going to be the same anymore. GPU architectures are making those large giga-, tera- and petaFLOP systems faster, greener, smaller and cheaper.

More and more applications are getting accelerated with CUDA. Good examples include CFD solvers running on a twentieth of the resources for a given iteration, HMMER running around 62x faster, and Monte Carlo simulations running up to 50x faster.

GPU architectures built on CUDA and FireStream already made it onto the Top500 list some time ago – a milestone – and are fast moving up the charts; I am sure the 35th Top500 list is going to have many more GPU+CPU systems. It is only a matter of time before desktops are replaced with supercomputers. The GPUs help offload the parallel, non-sequential parts of an application from the CPU, which is how performance of the kind described above is achieved.
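As a flavour of what that offload looks like in practice, here is a minimal CUDA sketch of my own (the names and sizes are made up for illustration): the CPU keeps the sequential control flow, while the bulk data-parallel loop is shipped to the GPU as a kernel running across thousands of threads.

// offload_sketch.cu -- my own minimal illustration of the CPU-to-GPU offload pattern.
#include <cstdio>
#include <cuda_runtime.h>

// Data-parallel part, offloaded to the GPU: each thread updates one element (y = a*x + y).
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 22;                 // four million elements
    size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // The sequential host code stays on the CPU; only the bulk arithmetic moves across.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);   // thousands of threads in flight

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

The same pattern – copy in, launch the kernel over the parallel part, copy out – is what sits behind the accelerated CFD, HMMER and Monte Carlo numbers quoted above.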

Even more interesting to watch: is CUDA really breaking the dominance of Intel and AMD? Will more and more developers optimize applications with OpenCL or DirectX for CUDA hardware? And what will happen to ClearSpeed?

Cheers!

Uttam

LHC & HTC


The Large Hadron Collider, the world's largest scientific experiment, is under way, and it will get intense over the next few months. The current stage is warm-up and calibration. The scope of the experiment is massive. While we have all read and heard about the complexity involved – the huge accelerator ring, the collision and beam-guiding techniques, the vacuum comparable to interplanetary space, and so on – the expected outcomes of its findings – the Higgs boson, quark-gluon plasma and the beauty quark or 'b quark' – are hard to come to terms with.

[Image: LHC]

I am excited, more so to see how the world is coming together and collaborating, exploiting scientific computing and the power of the network to aid an experiment of this scale. The computing resources within CERN, though really enormous, aren't enough, and so data is being distributed to hundreds of sites around the globe, to the thousands of physicists contributing to the experiment.

The CERN facility hosts a cluster with over 15,000 cores, amounting to several tens of TFlops, plus another 6,000-odd PCs; storage stands at about 7,000 TB as they start out. But once the LHC runs at full capacity for over 200 days, the need to store and process data will grow to a magnitude never handled before. Every second, 600 million particle collisions are measured, and scientists filter out the thousand or so that are interesting. The electronic 'photo' of each event requires 1 to 2 MB of storage. Through the tiered model of computation, over 100,000 computers are likely to participate.

The data storage requirement will eventually be of the order of 20 petabytes per year.
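A quick back-of-the-envelope check (my own rounding, using the ~1,000 retained events per second and ~1.5 MB per event quoted above) shows how the per-second numbers add up to that figure:

  ~1,000 events/s × ~1.5 MB/event ≈ 1.5 GB/s of event data
  1.5 GB/s × 200 days × 86,400 s/day ≈ 26 PB per running year

which, allowing for duty cycle and overheads, is consistent with the ~20 petabytes per year planning figure.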

The scientists at CERN and across the world needed a new approach to data storage, management, sharing and analysis – tasks handled by the LHC Computing Grid (LCG) project. I had a chance to learn a little about the LCG from a scientist at TIFR in Mumbai some time ago. Unlike a conventional grid, the LCG uses lower-overhead protocols and is much more efficient.

[Image: LCG]

The interconnects in use and the scale of job submissions (bear in mind this is not HPC but HTC – High Throughput Computing) are tremendous.

I am glued and will follow the progress closely.

Cheers!

Uttam

HPC Trends from IDC


An interesting read on HPC trends from IDC – just wondering if IDC missed writing about how Locuz would be the Global Challenger :)

Have a Great 2008!

Uttam

Top 500


India being represented with 9 of the Top500 supercomputers at SC07, up from 8, is remarkable. But EKA, built by CRL, Pune, being listed in the top 10 at the #4 position with 117.9 teraflops is breathtaking.

Uttam

Lustre to use OpenSolaris ZFS


Lustre, the open source HPC filesystem, has already done a lot of good for HPC users, whose servers handle terabytes of information coming in from hundreds to thousands of cluster nodes.

The news from CFS about using OpenSolaris ZFS on Lustre servers is going to do even more good. ZFS is a revolutionary filesystem and is the best fit for what Lustre should go on to become in the future.

This will bring stronger data integrity, scaling that none can match, and the data management that ZFS is known for.

At Locuz we are looking closely at this whole piece and hope to bring this advantage to many of our customers.
