21. Initial Context
- Studying the intersection of HPC/scientific computing and the cloud
- Data locality is a key issue for us
- Cloud computing looks to fill a niche in pre- and post-processing, as well as generalized mid-range compute
- This project is an introductory, preparatory step into the larger research project
22. Sample Application Goals
- Make CMIP3 data more accessible/consumable
- Prototype the use of cloud computing for post-processing of scientific data
- Answer the questions: Can cloud computing be used effectively for large-scale data? How accessible is the programming paradigm?
- Note: the focus is on the mechanics, not the science (we could just as well be using the number of foobars in the world rather than temperature simulations)
23. Technologies Utilized
- Windows Azure (tables, blobs, queues, web roles, worker roles)
- OGDI (http://ogdisdk.cloudapp.net/)
- C#, F#, PowerShell, DirectX, Silverlight, WPF, Bing Maps (Virtual Earth), GDI+, ADO.NET Data Services
24. Two-Part Problem
- Get the data into the cloud and expose it so that generic clients can consume it in Internet-friendly formats
- Provide some sort of visualization or sample application that gives context/meaning to the data
25. Context: 35 Terabytes of Numbers
How much data is that? A single latitude/longitude map at typical climate model resolution is about 40 KB. If you wanted to look at all 35 TB in the form of these latitude/longitude plots, and:
- every 10 seconds you displayed another map, and
- you worked 24 hours a day, 365 days a year,
you could complete the task in about 200 years.
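The estimate above can be sanity-checked directly. A quick sketch (decimal units assumed; with the stated per-map size the arithmetic lands in the high 200s, the same order of magnitude as the slide's round ~200-year figure):

```python
# Back-of-the-envelope check of the "view all 35 TB as maps" estimate.
TOTAL_BYTES = 35e12           # 35 TB (decimal units assumed)
MAP_BYTES = 40e3              # ~40 KB per lat/lon map
SECONDS_PER_MAP = 10
SECONDS_PER_YEAR = 365 * 24 * 3600

maps = TOTAL_BYTES / MAP_BYTES                      # ~875 million maps
years = maps * SECONDS_PER_MAP / SECONDS_PER_YEAR
print(round(years))   # → 277 (a few hundred years, per the slide's point)
```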
26. Dataset Used
- 5 GB worth of NetCDF files
- Contributing sources:
  - NOAA Geophysical Fluid Dynamics Laboratory, CM2.0 model
  - NASA Goddard Institute for Space Studies, C4x3
  - NCAR Parallel Climate Model (Version 1)
- Climate of the 20th Century Experiment, run 1, daily:
  - Surface Air Temperature (tas)
  - Maximum Surface Air Temperature (tasmax)
  - Minimum Surface Air Temperature (tasmin)
- > 1.1 billion unique values (lat/lon/temperature tuples)
- 0.014% of the total set
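The 0.014% figure follows from the sizes above; a one-line check (assuming binary units, i.e. 1 TB = 1024 GB):

```python
# Fraction of the full 35 TB archive represented by the 5 GB sample.
sample_gb = 5
total_tb = 35
fraction = sample_gb / (total_tb * 1024) * 100
print(f"{fraction:.3f}%")   # → 0.014%
```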
27. Application Workflow
- Source files are uploaded to blob storage
- Each source file is split into thousands of CSV files stored in blob storage
- The process generates a Load Table command for each CSV created
- Load Table workers process the jobs and load CSV data into Azure Tables
- Once a CSV file has been processed, a Create Image job is created
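The split-and-enqueue pipeline above can be sketched in miniature. This is an illustrative Python sketch only: in-memory queues stand in for Azure Storage queues, a dict stands in for Azure Tables, and all names (`split_source_file`, `load_table_worker`, etc.) are assumptions, not the project's actual identifiers.

```python
# Minimal sketch of the split -> Load Table -> Create Image pipeline.
from queue import Queue

load_table_jobs = Queue()    # stands in for the Load Table queue
create_image_jobs = Queue()  # stands in for the Create Image queue
table_store = {}             # stands in for Azure Tables

def split_source_file(source_rows, chunk_size):
    """Split one 'source file' into CSV-sized chunks and enqueue a
    Load Table job for each chunk, mirroring the workflow above."""
    for i in range(0, len(source_rows), chunk_size):
        chunk_name = f"chunk-{i // chunk_size}.csv"
        load_table_jobs.put((chunk_name, source_rows[i:i + chunk_size]))

def load_table_worker():
    """Drain the Load Table queue; after each chunk is stored,
    enqueue a Create Image job for it."""
    while not load_table_jobs.empty():
        chunk_name, rows = load_table_jobs.get()
        table_store[chunk_name] = rows      # "load into Azure Tables"
        create_image_jobs.put(chunk_name)   # trigger image generation

# A tiny fake source file: 16 (lat, lon, temperature) rows.
split_source_file([(lat, lon, 280.0) for lat in range(4) for lon in range(4)],
                  chunk_size=4)
load_table_worker()
print(len(table_store), create_image_jobs.qsize())   # → 4 4
```

In the real system the workers run in separate Azure worker roles and poll the queues continuously; the single-threaded drain here is just to keep the sketch self-contained.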
28. Application Workflow (continued)
- Create Image workers process the queue, generating a heat-map image for each time set
- Once all data is loaded and images are created, a video is rendered from the resulting images for inclusion in visualization applications
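The core of heat-map generation is mapping each temperature value to a pixel color. The project used GDI+ for this; the simple blue-to-red linear colormap below is an assumption for illustration, not the project's actual palette.

```python
# Sketch: map a temperature value to an (r, g, b) heat-map color.
def heat_color(value, vmin, vmax):
    """Linear colormap: vmin -> blue, vmax -> red, clamped in between."""
    t = (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))            # clamp out-of-range values
    return (int(255 * t), 0, int(255 * (1 - t)))

# One 'time set' rendered to pixels: a row of temperatures in kelvin.
row = [250.0, 275.0, 300.0]
pixels = [heat_color(v, 250.0, 300.0) for v in row]
print(pixels)   # → [(0, 0, 255), (127, 0, 127), (255, 0, 0)]
```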
29. Current Data Loaded
- > 1.1 billion table entities (lat/lon/value)
- > 250,000 blobs
- > 75 GB (blob storage alone)
30. Data Load Review
Results for the first subset:
- Averaged 2:30 per time period
- 40,149 time periods
- 24 time periods per worker-hour
- 1,672.8 worker-hours
- 14 active workers
- 119.5 calendar hours
- 328,900,608 total entities
- Near-linear scale-out
This represents 0.003428% of the total set.
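These throughput numbers are internally consistent, and the near-linear scale-out claim follows from dividing worker-hours by worker count. A quick check:

```python
# Verify the scale-out arithmetic on the slide.
seconds_per_period = 150                 # 2:30 per time period
periods = 40_149
workers = 14

periods_per_worker_hour = 3600 / seconds_per_period
worker_hours = periods / periods_per_worker_hour
calendar_hours = worker_hours / workers

print(periods_per_worker_hour)   # → 24.0
print(round(worker_hours, 1))    # → 1672.9 (the slide truncates to 1,672.8)
print(round(calendar_hours, 1))  # → 119.5
```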