Workstations, Micro-Clusters & Supercomputers by SWCS Systems
.: SWCS Systems "Daedalus Cluster" :.
A multi-node, dual-socket 8-core C32 Opteron "Bulldozer" edition of the Daedalus Prototype, rack-mounted and data-center ready!
.: SWCS Systems "Daedalus" :.
Daedalus is our entry-level developer system for research and development on our supercomputing platforms.
It is designed as a cost-effective platform for R&D using OpenCL, C-to-FPGA tools, OpenMPI, and object-oriented programming, so that scientists,
physicists, cryptologists, signal analysts, and others can create solutions in high-performance computing (HPC) and reconfigurable computing (RC).
Components include the latest AMD Opteron "Bulldozer" processors, four AMD FirePro GPUs, and an array of Virtex-6 FPGAs.
Each system features Corsair's Obsidian 800T Series Full
Tower Case with 4 hot-swap drive bays. "Daedalus" is available for order; please contact an SWCS representative by phone or email for a quote.
SWCS Daedalus Gen 1 Specifications:
* Processing: Dual-socket, 2x AMD Opteron 4200 Series 8-core processors,
4x AMD FirePro V7900 GPUs, optional Pico Computing EX-500 PCIe bridge with 6 M-501 Virtex-6 FPGA modules, SWCS liquid cooling system for all processing units
* Storage & Memory:
8x 8GB DDR3 ECC RAM
2x Western Digital RE4 7200RPM 500GB HDDs in RAID1
4x Hitachi Ultrastar SATA II 2TB HDDs in RAID5 (hot-swap)
3x Hitachi Deskstar 4TB HDDs in RAID5 (hot-swap)
1x Hitachi Ultrastar 2TB self-encrypting HDD with AES 256-bit self-encrypting drive caddy
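As a rough guide to the usable capacity behind these arrays, the standard RAID arithmetic can be sketched as follows (a minimal sketch using decimal terabytes; filesystem overhead is ignored):

```python
def raid_usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Approximate usable capacity for simple RAID levels.

    RAID1 mirrors the data (capacity of one drive);
    RAID5 stripes with one drive's worth of parity (n - 1 drives usable).
    """
    if level == "RAID1":
        return size_tb
    if level == "RAID5":
        return (drives - 1) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Daedalus storage from the spec list above
print(raid_usable_tb("RAID1", 2, 0.5))  # 2x 500GB RE4 mirror    -> 0.5 TB
print(raid_usable_tb("RAID5", 4, 2.0))  # 4x 2TB Ultrastar RAID5 -> 6.0 TB
print(raid_usable_tb("RAID5", 3, 4.0))  # 3x 4TB Deskstar RAID5  -> 8.0 TB
```

So the hot-swap arrays provide roughly 6TB and 8TB of usable space respectively, alongside the mirrored boot pair.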
* System: SuSE Enterprise Desktop Edition, Wolfram Research's Mathematica, AMD APP SDKs, Eclipse IDE
"Daedalus Prototype" will utilize four AMD FirePro V7900 GPUs,
each featuring a single GPU with 1280 stream processors and 2GB of GDDR5 RAM delivering 160GB/s of memory bandwidth.
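The 160GB/s figure can be sanity-checked from the memory interface; a minimal sketch, assuming the V7900's 256-bit bus and GDDR5 at an effective 5.0Gbps per pin:

```python
bus_width_bits = 256       # V7900 memory interface width (assumed)
effective_rate_gbps = 5.0  # 1.25 GHz GDDR5, x4 effective data rate (assumed)

# bytes per transfer cycle across the bus, times the effective rate
bandwidth_gb_s = bus_width_bits / 8 * effective_rate_gbps
print(bandwidth_gb_s)  # 160.0 GB/s, matching the quoted figure
```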
"Daedalus Prototype" can utilize 6 Pico Computing Virtex-6 M-501 modules, each with 512MB of DDR2 RAM and an x8 PCIe host interface, mounted on a Pico Computing EX-500 PCIe x16 Gen 2.0 bridge board.
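A quick back-of-the-envelope on the bridge topology (a sketch assuming PCIe Gen 2.0's ~500MB/s per lane per direction after 8b/10b encoding) shows why the host link, not the modules, bounds aggregate throughput:

```python
lane_gb_s = 0.5              # PCIe Gen 2.0: 5 GT/s, 8b/10b encoding -> ~500 MB/s/lane
per_module = 8 * lane_gb_s   # each M-501 has an x8 interface  -> 4.0 GB/s
host_link = 16 * lane_gb_s   # EX-500 bridge in an x16 Gen 2.0 slot -> 8.0 GB/s
aggregate = 6 * per_module   # six modules behind one bridge   -> 24.0 GB/s

print(aggregate / host_link)  # 3.0 : the host link is oversubscribed 3:1
```

In practice this matters only for host-to-FPGA streaming workloads; computation resident on the FPGAs is unaffected.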
.: SWCS Systems "Armitage" :.
Armitage will have two variations: the Armitage HPC and the Armitage CFS.
The HPC model will offer the same solutions as Daedalus, but in a mid-tower ATX format and with an EVGA dual-processor GTX 560 Ti 2Win GPU for hardware acceleration. This system will run SuSE Enterprise Desktop Edition and is designed
with the scientific computing specialist in mind: a desk-side workstation built specifically for R&D using
OpenCL, C-to-FPGA tools, OpenMPI, and object-oriented programming, and for creating solutions in HPC and RC.
The CFS model will run either Windows 7 or Windows Server 2008 R2 and will be a dedicated Computer Forensics &
Data Recovery host. Many legacy and modern I/O ports will be offered in order to interface with any data storage device
the examiner may come across, and the system will pack the processing power needed when encrypted media are encountered. Further, the Armitage
CFS system can interface with the Armitage HPC or Daedalus Prototype to expand its processing capabilities and tackle the most
difficult cryptographic situations.
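The kind of distributed cryptographic workload described above can be sketched in miniature: a candidate-passphrase search fanned out across workers (threads here, standing in for nodes or cores in the larger system). The salt, iteration count, and wordlist below are hypothetical, for illustration only:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional

# Hypothetical target: a PBKDF2-derived key recovered from an evidence image.
SALT = b"example-salt"
ITERATIONS = 100_000
TARGET = hashlib.pbkdf2_hmac("sha256", b"hunter2", SALT, ITERATIONS)

def try_candidate(word: str) -> Optional[str]:
    """Derive a key from one candidate passphrase and compare to the target."""
    key = hashlib.pbkdf2_hmac("sha256", word.encode(), SALT, ITERATIONS)
    return word if key == TARGET else None

def search(candidates: List[str], workers: int = 4) -> Optional[str]:
    # Each worker independently tests its share of the wordlist;
    # scaling out to more nodes just means splitting the list further.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(try_candidate, candidates):
            if hit is not None:
                return hit
    return None

print(search(["letmein", "password", "hunter2", "qwerty"]))  # prints "hunter2"
```

Because each candidate is independent, this workload scales almost linearly across CPU cores, GPUs, or additional systems.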
Limited runs of these systems will feature the name "SWCS Armitage Prototype" and Corsair's Special Edition White Graphite Series 600T
Mid-Tower Case, with standard models using the Corsair 650D mid-tower case. Both systems are available for special order. If interested,
please contact an SWCS representative by phone or email for a quote.
"Armitage" utilizes the latest dual-processor GPU from EVGA, the GTX 560 Ti 2Win. Normally marketed for computer gaming, this powerful card offers 768 processing cores in total and 256.6GB/sec of combined memory bandwidth with 2GB of GDDR5 RAM (1GB per GPU, each on its own 256-bit bus). Each of the two GPUs delivers roughly 1.262 teraflops of single-precision throughput.
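Those throughput figures can be sanity-checked from the per-GPU core count and clocks (a sketch assuming the stock 1645MHz shader clock and 4008MT/s effective memory rate, both assumptions not stated in the spec sheet):

```python
cores_per_gpu = 384        # GF114: 384 CUDA cores per GPU, 768 across the card
shader_clock_ghz = 1.645   # stock GTX 560 Ti shader clock (assumed)
flops_per_core = 2         # one fused multiply-add per core per clock

tflops_per_gpu = cores_per_gpu * flops_per_core * shader_clock_ghz / 1000
print(round(tflops_per_gpu, 3))  # ~1.263, matching the quoted ~1.262 TFLOPS

mem_rate_gbps = 4.008                   # effective GDDR5 rate per pin (assumed)
per_gpu_bw = 256 / 8 * mem_rate_gbps    # 256-bit bus per GPU -> ~128.3 GB/s
print(round(2 * per_gpu_bw, 1))         # ~256.5 GB/s combined across both GPUs
```

The quoted bandwidth is the sum of the two independent 256-bit memory interfaces, not a single 512-bit bus.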
SWCS Armitage CFS Specifications:
* Processing: Dual-socket, 2x AMD Opteron 4100 Series 6-core 2.8GHz
processors, EVGA GTX 560 Ti 2Win dual-processor GPU.
* Storage & Memory:
4x 8GB RAM
4x Kingston 120GB enterprise SSDs (two RAID1 pairs)
4x Western Digital VelociRaptor 10kRPM 640GB HDDs
WiebeTECH Forensic LabDock
2x WiebeTECH trayless drive caddies
Blu-ray BD-RE dual-layer drive
* System: SuSE Enterprise Desktop, Windows 7 Dual Boot
.: SWCS Systems "Neuromancer" :.
SWCS Systems' flagship supercomputing solution. Included will be a workstation with
dedicated peripherals and an external data solution that work together symbiotically. The system will utilize
technologies from our partners: a collaborative solution to address the ever-expanding applications in high-performance
computing and cyber-defense.
Our prototype will be built by third quarter 2012.
As for the name, it comes from William Gibson's novel of the same title,
with the character's description in mind.
SWCS Neuromancer Workstation Specifications:
* Processing: Quad-socket, 4x AMD Opteron "Bulldozer" Series 16-core 2.5GHz
processors, 2x AMD FirePro V9800 GPUs, Pico Computing Virtex-7 FPGA.
* Storage & Memory:
8x 8GB RAM
2x Kingston SSDs
2x Western Digital VelociRaptor 10kRPM 6.0Gb/s 600GB HDDs
2x Western Digital RE4 7200RPM 2TB HDDs
* System: SuSE Enterprise Desktop
SWCS Neuromancer Compute/NAS Node Specifications:
* Processing: Dual-socket, 2x AMD Opteron "Bulldozer" Series 16-core 2.5GHz
processors, 4x AMD FirePro V9800P GPUs
* Storage & Memory:
8x 8GB RAM
2x Kingston SSDs
2x Western Digital VelociRaptor 10kRPM 6.0Gb/s 600GB HDDs
4x Western Digital RE4 7200RPM 2TB HDDs
* System: SuSE Enterprise Server
System Solutions of ARSC
.: Arctic Region Supercomputing Center :.
http://www.arsc.edu/
The staff of SWCS conduct testing and modeling in High Performance Computing & Reconfigurable Computing
at the Arctic Region Supercomputing Center, with thanks and gratitude to Dr. Greg Newby, our CTO and Director of the ARSC,
and Dr. Larry Davis of the DoD HPCMPO. The mission of this research is to study HPC in a controlled environment and
apply what is learned. The project information and findings will then be utilized
to build the SWCS network & systems to higher standards and offer stronger resources to their users.
The ARSC has the expertise, software and hardware for High-Performance Computing (HPC). Its depth of experience in providing
HPC support -- installation, management, operations and maintenance -- includes proprietary mainframes/clusters, commercial
off-the-shelf (COTS)-based systems and support of add-on technologies, as well as information security support meeting prescribed
academic and federal research requirements. ARSC has an established multi-petabyte data archive with multiple terabytes of disk as
well as an established 10Gbit/s network connected to multiple academic and federal research networks. Should your organization
need supercomputing capabilities for a project, the systems of the ARSC are available
for very limited outsourced use.
If interested, please contact Ryan Wolf directly at firstname.lastname@example.org.
The Current & Past Systems at ARSC:
* Pacman: Penguin Computing cluster composed of Opteron processors. It has 2032 compute cores and a 109 TB Panasas version 12 file system, including two GPU nodes featuring a total of 4 nVidia M2050 Fermi GPU cards.
* Sun cluster composed of Opteron processors. It has 1584 compute cores and a 68 TB Lustre file system.
* Pingo: Cray XT5 system running UNICOS/lc. The system has 3456 compute cores and a 150 TB Lustre file system.
Please check back often for updates!