The Highly Adaptive Global Supercomputing Grid (HAGSG) differs fundamentally from most Supercomputers, primarily due to its multitasking design and privately distributed footprint.
Supercomputers such as the Tianhe-2 (built by NUDT and Inspur) and IBM's Sequoia, along with many others from Cray, Sun Microsystems, and other vendors, are based on a centralized model usually designed for a single purpose: highly calculation-intensive tasks. These include problems involving quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion).
Supercomputers are typically found at major universities, military agencies, Fortune 100 companies, and scientific research laboratories; they are far too cost-prohibitive for a small or medium-sized company to even consider utilizing their power and speed. For example, the Tianhe-2 was developed by a team of 1,300 scientists and engineers at a cost of $390 million.
Companies like Ford Motor Company rent IBM's supercomputers to process extensive national marketing data within a few days, at a cost of thousands of dollars per hour.
Typical Supercomputer designs are very expensive and require specifically matched hardware available only from the manufacturing vendor. This means the components of the Supercomputer cannot be mixed and matched between models from the same vendor, let alone across different hardware vendors.
In a centralized model, the Supercomputer is located in very large, secure, and expensive buildings designed to withstand natural disasters, power failures, and other disruptive problems that lead to downtime (which is unacceptable in the Supercomputer world). The World Wide Vector's (WWV) Supercomputer design is a distributed model, called the Highly Adaptive Distributed Dataflow Architecture, or "HADDFA". The word "distributed," according to the dictionary, means to diffuse over an area, spread, or scatter.
The Highly Adaptive Distributed Dataflow Architecture, or "HADDFA", is a single Supercomputer that is distributed (or located) throughout multiple small data centers across North, Central, and South America, and is expanding to key locations overseas. This distributed model is also designed to withstand natural disasters, power failures, and other disruptive problems that lead to downtime, without all of the expense involved in the older, outdated centralized model. In addition, the "HADDFA" model has some unique advantages. In the "HADDFA" Supercomputer, your data automatically has built-in Disaster Recovery and Fault Tolerance. This is accomplished at runtime, since all data is written to at least two geographically dispersed data centers.
LET ME MAKE THIS CLEAR: THE DATA IS NOT COPIED, BACKED UP, OR SYNCHRONIZED FROM ONE LOCATION TO ANOTHER. ALL DATA IS WRITTEN TO AT LEAST TWO LOCATIONS AT THE SAME TIME.
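The simultaneous-write behavior described above can be sketched as a synchronous dual-write: every write goes to all replicas at once and succeeds only when every replica acknowledges. This is a minimal illustrative sketch, not HAGSG's actual implementation; the data center stores and function names below are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Two hypothetical, geographically dispersed data centers,
# modeled here as simple in-memory key-value stores.
datacenter_east = {}
datacenter_west = {}

def write_record(key, value, replicas=(datacenter_east, datacenter_west)):
    """Write to every replica concurrently; succeed only if all acknowledge.

    This is a dual-write, not a copy or backup: no replica is a 'primary'
    that the others are later synchronized from.
    """
    def store(replica):
        replica[key] = value
        return True  # acknowledgement from this replica

    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        acks = list(pool.map(store, replicas))
    return all(acks)

ok = write_record("order-1001", {"amount": 250})
```

Because the write returns only after both locations acknowledge, either data center can serve the data immediately if the other fails, which is what makes disaster recovery a built-in property rather than a separate backup process.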
Currently, most large corporations' annual contracts for Disaster Recovery and Fault Tolerance solutions are obscenely expensive. This feature alone is worth many times more than the price of this offering.
The "HADDFA" Supercomputer Access Gateways ("SAG") are connected to the Internet, making the Supercomputer accessible over DSL and private-line connections. These "HADDFA" Supercomputer Access Gateways are the "onramps" to the "HADDFA" Supercomputer world.
The "HADDFA" Supercomputing centers are interconnected via a private fiber-optic network to one of many Regional Area Array points, which form the North American Supercomputing Grid (NASG), Central American Supercomputing Grid (CASG), and South American Supercomputing Grid (SASG). Once you are authenticated into the Supercomputer, all activity occurs "inside" the Supercomputer across this private network. The "HADDFA" Supercomputer utilizes several advanced security methods to repel any predators attempting to access the World Wide Array. These methods include continuously variable Dynamic Forward Encryption Security, Advanced Firewalls, Intrusion Detection, Intrusion Prevention, and Dynamic Port Knocking.
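Of the methods listed above, port knocking is the most concrete to illustrate: a gateway keeps its service port closed until a client "knocks" on a secret sequence of ports in the right order. The sketch below is a simplified, hypothetical model of the technique in general (the port sequence and class name are illustrative, not actual HAGSG parameters).

```python
# Minimal port-knocking sketch: the gateway grants access only after a
# client hits a secret sequence of ports in order. A wrong knock resets
# that client's progress, so scanning ports at random never opens the door.
SECRET_SEQUENCE = (7000, 8000, 9000)  # illustrative, not a real configuration

class KnockGateway:
    def __init__(self, secret=SECRET_SEQUENCE):
        self.secret = secret
        self.progress = {}     # client -> number of correct knocks so far
        self.open_for = set()  # clients currently allowed through

    def knock(self, client, port):
        step = self.progress.get(client, 0)
        if port == self.secret[step]:
            step += 1
            if step == len(self.secret):  # full sequence completed
                self.open_for.add(client)
                step = 0
            self.progress[client] = step
        else:
            self.progress[client] = 0  # wrong knock resets the sequence

    def is_open(self, client):
        return client in self.open_for

gw = KnockGateway()
for port in (7000, 8000, 9000):
    gw.knock("10.0.0.5", port)
```

In the "Dynamic" variant the sequence itself changes over time, so a captured knock sequence cannot be replayed later.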
The "HADDFA" Supercomputer provides unmatched scalability and performance: near-infinite scalability, blistering database-driven performance, cross-platform compatibility, and automatic configuration, with ZERO operating system registry modifications required. You simply use your Internet browser to access one of our World Wide Web landing pages and launch the World Wide Array Adaptive Client. The "HADDFA" Supercomputer "scheduler" keeps track of all Supercomputer resources and schedules "work" based on current load, computing resources, and user proximity. This gives the "HADDFA" Supercomputer the unusual advantage of redeploying previous-generation equipment to less critical functions, which significantly extends the hardware usability life cycle and drastically reduces overall Supercomputer costs.
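The scheduling decision described above, placing work according to current load and user proximity, can be sketched as a weighted ranking of candidate nodes. The node names, load figures, and weights below are purely illustrative assumptions, not HAGSG's actual scheduling policy.

```python
# Hypothetical resource records for a few compute nodes.
# 'load' is current utilization (0.0-1.0); 'distance_km' approximates
# the user's proximity to each node.
nodes = [
    {"name": "dallas",  "load": 0.85, "distance_km": 300},
    {"name": "denver",  "load": 0.30, "distance_km": 900},
    {"name": "atlanta", "load": 0.40, "distance_km": 1100},
]

def schedule(nodes, load_weight=0.7, distance_weight=0.3):
    """Rank nodes by a weighted score of current load and user proximity;
    the lowest score wins the next unit of work."""
    max_dist = max(n["distance_km"] for n in nodes)

    def score(n):
        return (load_weight * n["load"]
                + distance_weight * (n["distance_km"] / max_dist))

    return min(nodes, key=score)

best = schedule(nodes)  # with these weights, the lightly loaded node wins
```

A scheduler of this shape is also what enables the hardware-redeployment advantage: older nodes simply report lower capacity and naturally attract only the less demanding work.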