OpenACC April Monthly Highlights are full of the latest OpenACC news, events, resources and more. Learn about upcoming events, including ISC, and explore GTC recorded sessions covering a variety of OpenACC topics.
Check out the latest in OpenACC this month including the PGI 18.1 release, GTC 2018 activity, paper highlights, upcoming events and a call for paper submissions.
NVIDIA Volta Tensor Core GPU achieves new AI performance milestones in ResNet-50 for a single chip, single node, and single cloud instance. Explore the performance improvements.
GPU Computing with Python and Anaconda: The Next Frontier (NVIDIA)
Learn how Python is becoming the glue that binds data science, how rapid integration empowers data scientists to combine new technologies, and the two primary goals in store for Anaconda.
NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019 (NVIDIA)
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
At the 2018 GPU Technology Conference in Silicon Valley, NVIDIA CEO Jensen Huang announced the new "double-sized" 32GB Volta GPU; unveiled the NVIDIA DGX-2, the power of 300 servers in a box; showed an expanded inference platform with TensorRT 4 and Kubernetes on NVIDIA GPU; and revealed the NVIDIA GPU Cloud registry with 30 GPU-optimized containers and made it available from more cloud service providers. GTC attendees also got a sneak peek of the latest NVIDIA DRIVE software stack and the next DRIVE AI car computer, "Orin," along with developments in the NVIDIA Isaac platform for robotics and Project Clara, NVIDIA's medical imaging supercomputer.
Get updates about OpenACC. This month's focus: a new OpenACC online course, a book, and a number of exciting events, all highlighted in the OpenACC September Update.
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the upcoming NVIDIA GTC 2019, a complete schedule of GPU hackathons and more!
Read updates highlighting what’s hot in high performance computing, with this week's edition focusing on news of NVIDIA's announcements at Supercomputing 2016.
NVIDIA founder and CEO Jensen Huang took the stage in Munich — one of the hubs of the global auto industry — to introduce a powerful new AI computer for fully autonomous vehicles and a new VR application for those who design them.
In this special edition of "This week in Data Science," we focus on the top 5 sessions for data scientists from GTC 2019, with links to the free sessions available on demand.
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the first remote GPU Hackathons, a complete schedule of upcoming events, using OpenACC for a biophysics problem, the NVIDIA HPC SDK, GCC 10, new resources and more!
High-Performance Computing: A Key Factor for the Competitiveness of the Country, Science and Industry (Igor José F. Freitas)
Video: https://www.youtube.com/watch?v=8cFqNwhQ7uE
A key factor for the competitiveness of the country, science and industry.
Talk given during Intel Innovation Week 2015.
OpenACC and Open Hackathons Monthly Highlights: July 2022 (OpenACC)
Stay up-to-date with the OpenACC and Open Hackathons Monthly Highlights. July’s edition covers the 2022 OpenACC and Hackathons Summit, NVIDIA’s Applied Research Accelerator Program, upcoming Open Hackathons and Bootcamps, recent research, new resources, and more!
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover working on applications for the new Frontier supercomputer, using OpenACC for weather forecasting, upcoming GPU Hackathons and Bootcamps, and new resources!
Stay up-to-date with the OpenACC Monthly Highlights. July's edition covers the OpenACC Summit 2021, upcoming GPU Hackathons and Bootcamps, a PEARC21 panel review, recent research, new resources and more!
Stay up-to-date with the OpenACC Monthly Highlights. February's edition covers the updated OpenACC 3.2 specification, upcoming GPU Hackathons and Bootcamps, OpenACC's BOF at SC21, recent research, new resources and more!
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the Organization's newly elected president, an updated OpenACC 3.1 specification, upcoming 2021 GPU Hackathons, new resources and more!
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the on-demand sessions from the OpenACC Summit 2020, upcoming GPU Hackathons and Bootcamps, an OpenACC-to-FPGA framework, the NERSC GPU Hackathon, new resources and more!
Video and slides synchronized; mp3 and slide downloads available at https://bit.ly/2JrUYLl.
Alison Lowndes talks about the hardware and software that comprise NVIDIA's GPU computing platform for AI, from PC to data center, cloud to edge, and training to inference. She details current state-of-the-art research and recent internal work combining robotics with virtual reality and reinforcement learning in an end-to-end simulator for training and testing robots. Filmed at qconlondon.com.
Alison Lowndes is responsible for NVIDIA's Artificial Intelligence Developer Relations in the EMEA region. She consults on a wide range of AI applications, including planetary defence with NASA & the SETI Institute and continues to manage the community of AI & Machine Learning researchers around the world.
OpenACC and Open Hackathons Monthly Highlights: September 2022 (OpenACC)
Stay up-to-date on the latest news, research and resources. This month's edition covers the Princeton GPU Hackathon, OpenACC at SC22, updates from GNU Tools Cauldron, the upcoming UK DPU Hackathon, relevant research and more!
ASSESSING THE PERFORMANCE AND ENERGY USAGE OF MULTI-CPUS, MULTI-CORE AND MANY... (ijdpsjournal)
This paper studies the performance and energy consumption of several multi-core, multi-CPU and manycore hardware platforms and software stacks for parallel programming. It uses the Multimedia Multiscale Parser (MMP), a computationally demanding image encoder application that was ported to several parallel hardware and software environments, as a benchmark. Hardware-wise, the study assesses NVIDIA's Jetson TK1 development board, the Raspberry Pi 2, and a dual Intel Xeon E5-2620/v2 server, as well as NVIDIA's discrete GPUs GTX 680, Titan Black Edition and GTX 750 Ti. The assessed parallel programming paradigms are OpenMP, Pthreads and CUDA, plus a single-thread sequential version, all running in a Linux environment. While the CUDA-based implementation delivered the fastest execution, the Jetson TK1 proved to be the most energy-efficient platform, regardless of the parallel software stack used. Although it has the lowest power demand, the Raspberry Pi 2's energy efficiency is hindered by its lengthy execution times, effectively consuming more energy than the Jetson TK1. Surprisingly, OpenMP delivered twice the performance of the Pthreads-based implementation, demonstrating the maturity of the tools and libraries supporting OpenMP.
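The abstract's energy argument, that the lowest-power board is not necessarily the most energy-efficient one, comes down to energy = power × time. A minimal sketch with hypothetical numbers (the paper's measured figures are not reproduced here):

```python
def energy_joules(power_w: float, runtime_s: float) -> float:
    """Energy consumed = average power draw (W) x execution time (s)."""
    return power_w * runtime_s

# Hypothetical figures for illustration only; see the paper for the
# actual Jetson TK1 and Raspberry Pi 2 measurements.
jetson = energy_joules(power_w=10.0, runtime_s=60.0)   # faster, higher power
pi2 = energy_joules(power_w=4.0, runtime_s=300.0)      # slower, lower power

print(jetson, pi2)  # 600.0 1200.0
```

With these (invented) numbers the Pi draws less than half the power yet consumes twice the energy, because it runs five times longer, which is exactly the trade-off the paper measures.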
Arm A64fx and Post-K: Game-Changing CPU & Supercomputer for HPC, Big Data, & AI (inside-BigData.com)
Satoshi Matsuoka from RIKEN gave this talk at the HPC User Forum in Santa Fe.
"With the rapid rise of Big Data and AI as a new breed of high-performance workloads on supercomputers, we need to accommodate them at scale, hence the need for R&D on hardware and software infrastructures where traditional simulation-based HPC and BD/AI converge, in a BYTES-oriented fashion. Post-K is the flagship next-generation national supercomputer being developed by RIKEN and Fujitsu in collaboration. Post-K will have hyperscale-class resources in one exascale machine, with well more than 100,000 nodes of server-class A64fx many-core Arm CPUs, realized through an extensive co-design process involving the entire Japanese HPC community.
Rather than focus on double-precision flops, which are of lesser utility, Post-K, especially its A64fx processor and the Tofu-D network, is designed to sustain extreme bandwidth on realistic applications, including those for oil and gas such as seismic wave propagation, CFD, and structural codes, besting its rivals by several factors in measured performance. Post-K is slated to perform 100 times faster on some key applications compared with its predecessor, the K computer, and will also likely be the premier big data and AI/ML infrastructure. Currently, we are conducting research to scale deep learning to more than 100,000 nodes on Post-K, where we would obtain near top GPU-class performance on each node."
Watch the video: https://wp.me/p3RLHQ-k6G
Learn more: https://en.wikichip.org/wiki/supercomputers/post-k and http://hpcuserforum.com
OpenACC and Open Hackathons Monthly Highlights: August 2022 (OpenACC)
Stay up-to-date with the OpenACC and Open Hackathons Monthly Highlights. August’s edition covers the 2022 OpenACC and Hackathons Asia-Pacific Summit, NVIDIA’s GTC, upcoming Open Hackathons and Bootcamps, EuroHPC, the launch of Frontier and Polaris supercomputers, recent research, new resources, and more!
We pioneered accelerated computing to tackle challenges no one else can solve. Now, the AI moment has arrived. Discover how our work in AI and the metaverse is profoundly impacting society and transforming the world’s largest industries.
Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds at GTC 2022.
Our passion is to inspire and enable the da Vincis and Einsteins of our time, so they can see and create the future. We pioneered graphics, accelerated computing, and AI to tackle challenges ordinary computers cannot solve. See how we're continuously inventing the future--from our early days as a chip maker to transformers of the Metaverse.
Outlining a sweeping vision for the “age of AI,” NVIDIA CEO Jensen Huang Monday kicked off the GPU Technology Conference.
Huang made major announcements in data centers, edge AI, collaboration tools and healthcare in a talk simultaneously released in nine episodes, each under 10 minutes.
“AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” Huang said, standing in front of the stove of his Silicon Valley home.
Behind a series of announcements touching on everything from healthcare to robotics to videoconferencing, Huang’s underlying story was simple: AI is changing everything, which has put NVIDIA at the intersection of changes that touch every facet of modern life.
More and more of those changes can be seen, first, in Huang’s kitchen, with its playful bouquet of colorful spatulas, which has served as the increasingly familiar backdrop for announcements throughout the COVID-19 pandemic.
“NVIDIA is a full stack computing company – we love working on extremely hard computing problems that have great impact on the world – this is right in our wheelhouse,” Huang said. “We are all-in, to advance and democratize this new form of computing – for the age of AI.”
This GTC is one of the biggest yet. It features more than 1,000 sessions—400 more than the last GTC—in 40 topic areas. And it’s the first to run across the world’s time zones, with sessions in English, Chinese, Korean, Japanese, and Hebrew.
The Best of AI and HPC in Healthcare and Life Sciences (NVIDIA)
Trends. Success stories. Training. Networking.
The GPU Technology Conference brings this all to one place. Meet the people pioneering the future of healthcare and life sciences and learn how to apply the latest AI and HPC tools to your research.
NVIDIA BioBERT, an optimized version of BioBERT, was created specifically for the biomedical and clinical domains, giving this community easy access to state-of-the-art NLP models.
Top 5 Deep Learning and AI Stories - August 30, 2019 (NVIDIA)
Read the top five news stories in artificial intelligence and learn how innovations in AI are transforming business across industries like healthcare and finance and how your business can derive tangible benefits by implementing AI the right way.
Seven Ways to Boost Artificial Intelligence Research (NVIDIA)
Higher education institutions have long been the backbone of scientific breakthroughs. View this slideshare to learn seven easy ways to help elevate your research.
Learn about the benefits of joining the NVIDIA Developer Program and the resources available to you as a registered developer. This slideshare also provides the steps of getting started in the program as well as an overview of the developer engagement platforms at your disposal. developer.nvidia.com/join
If you were unable to attend GTC 2019 or couldn't make it to all of the sessions you had on your list, check out the top four DGX POD sessions from the conference on-demand.
This Week in Data Science - Top 5 News - April 26, 2019 (NVIDIA)
What's new in data science? Flip through this week's Top 5 to read a report on the most coveted skills for data scientists, top universities building AI labs, data science workstations for AI deployment, and more.
NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2019 (#GTC19) in Silicon Valley, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and much more.
Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.
Transforming Healthcare at GTC Silicon Valley (NVIDIA)
The GPU Technology Conference (GTC) brings together the leading minds in AI and healthcare who are driving advances in the industry - from top radiology departments and medical research institutions to the hottest startups from around the world. Can't-miss panels and trainings at GTC Silicon Valley.
The promise of AI to provide better patient care through accelerated workflows and increased diagnostic capabilities was on full display at RSNA. Catch up with all the news and highlights from the event.
Top 5 Deep Learning and AI Stories - November 30, 2018 (NVIDIA)
Read this week's top 5 news updates in deep learning and AI: 75 healthcare companies partner with NVIDIA to power the future of radiology, NeurIPS conference showcases the latest in AI research, NVIDIA's new research lab pushes machine learning boundaries, Israeli AI startup restores speech abilities to stroke victims and others with impaired language, and radiologists can detect anomalies in medical images with deep learning.
Top 5 AI and Deep Learning Stories - November 9, 2018 (NVIDIA)
Read this week's top 5 news updates in deep learning and AI: DGX-2 supercomputers arrive fueling scientific discovery; AI pioneer talks about the future of AI; radiology poised for transformation with AI; the rise of AI developers in India; discover AI in federal government.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by Rik Marselis and me from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs (Alex Pruden)
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
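The core idea behind vector search, ranking documents by embedding similarity rather than exact keyword match, can be sketched in a few lines of plain Python. This is a conceptual illustration only: it does not use the MongoDB Atlas API, and the document names and toy vectors are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vector_search(query, documents, k=2):
    """Return the k document ids whose embeddings are closest to the query."""
    ranked = sorted(documents.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-dimensional "embeddings"; real systems use vectors with hundreds
# of dimensions produced by an embedding model.
docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.2],
    "doc_c": [0.8, 0.2, 0.1],
}
print(vector_search([1.0, 0.0, 0.0], docs))  # ['doc_a', 'doc_c']
```

A production system replaces the linear scan with an approximate nearest-neighbor index, which is what managed services such as Atlas Vector Search provide at scale.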
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Building RAG with a Self-Deployed Milvus Vector Database and Snowpark Container... - Zilliz
This talk gives hands-on advice on building RAG applications with an open-source Milvus database deployed as a Docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
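As a rough sketch of what the vector database contributes to a RAG pipeline, the retrieval step boils down to nearest-neighbor search over embeddings. The snippet below is a toy stand-in: in a real deployment the chunks and vectors live in Milvus and a client search call replaces the Python loop, and the example texts and two-dimensional vectors here are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, k=2):
    """Return the k stored chunks most similar to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["vector"]),
                    reverse=True)
    return [item["text"] for item in ranked[:k]]

# Tiny in-memory "vector store" standing in for a Milvus collection.
store = [
    {"text": "Milvus is a vector database", "vector": [1.0, 0.0]},
    {"text": "Snowpark runs containers",    "vector": [0.0, 1.0]},
    {"text": "RAG grounds LLM answers",     "vector": [0.9, 0.1]},
]
hits = retrieve([1.0, 0.0], store, k=2)
print(hits)
```

The retrieved chunks would then be stuffed into the LLM prompt as context; the database's job is doing this search efficiently at scale.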
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions), and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
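As a flavor of the kind of computation a power-flow engine performs, here is a toy DC (linearized) power flow for a three-bus network in plain Python. This is not PowSyBl's API, just a minimal sketch of the underlying model; the susceptances and injections are made up for illustration:

```python
def solve2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# Three-bus network: bus 0 is the slack bus (angle fixed at 0).
# Line susceptances (per unit) for lines 0-1, 0-2 and 1-2.
b01, b02, b12 = 10.0, 10.0, 10.0

# Net injections (generation minus load, per unit) at the non-slack buses.
P = [1.0, -0.5]

# Reduced susceptance matrix B' over buses 1 and 2: B' theta = P.
B = [[b01 + b12, -b12],
     [-b12, b02 + b12]]

theta = solve2(B, P)                    # voltage angles (radians)
flow_1_2 = b12 * (theta[0] - theta[1])  # active power on line 1-2
print(theta, flow_1_2)
```

Real tools like PowSyBl solve the full AC equations with far richer network models, but the linear system above is the essence of the DC approximation they also offer.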
TOP 5
HERE ARE THE “TOP FIVE” STORIES HIGHLIGHTING WHAT’S HOT IN HPC AND AI
1. Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud
2. Fujitsu Boosts RIKEN AI Supercomputer to 54 PetaFLOPS
3. More Power, Less Tower: AI May Make Aircraft Control Towers Obsolete
4. The Buck Stops – And Starts – Here for GPU Compute
5. Boston University Researchers Use AI to Detect Kidney Disease
WHY UIUC BUILT HPC APPLICATION CONTAINERS FOR NVIDIA GPU CLOUD
“Containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance or a workstation or a compute node. And it doesn’t require the typical amount of system administration skills and involvement to put one of these containers on a machine. And within the container image, in a manner that’s roughly similar to what you have in a virtual machine, the user can change anything they want. So what it looks like on the inside is an entire operating-system snapshot. So you can customize the layout of the file system and do all kinds of other things that would otherwise involve getting a lot of permission and cooperation, particularly in large computing installations.”
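In practice, pulling and running one of these NGC application containers looks roughly like the following. The image path and tag are placeholders to check against the NGC catalog, and GPU access via `--gpus all` assumes the NVIDIA Container Toolkit is installed on the host:

```shell
# Pull an HPC application image from NVIDIA GPU Cloud (NGC);
# replace <tag> with a current version from the NGC catalog.
docker pull nvcr.io/hpc/gromacs:<tag>

# Run it interactively with access to all host GPUs.
docker run --gpus all --rm -it nvcr.io/hpc/gromacs:<tag>
```

The point of the UIUC effort is that the application, its libraries, and its runtime all travel inside the image, so none of this requires root-level software installation on the compute node itself.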
FUJITSU BOOSTS RIKEN AI SUPERCOMPUTER TO 54 PETAFLOPS
Fujitsu has performed a massive upgrade to RIKEN’s RAIDEN supercomputer using NVIDIA DGX-1 servers outfitted with the latest Tesla V100 GPUs.
RAIDEN was originally deployed in 2017 using 24 of NVIDIA’s first-generation DGX-1 servers, each of which was powered by eight P100 GPUs. Together, that set of servers delivered four half-precision petaflops for deep learning applications. With the upgrade, those original servers were replaced with 54 DGX-1 boxes using the newest V100 GPUs. Since the V100 has special Tensor Core circuitry designed for neural network processing (125 teraflops of mixed-precision floating point operations per device), the upgraded system will offer a whopping 54 petaflops of deep learning performance.
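The headline figure is straightforward arithmetic over the per-GPU Tensor Core peak quoted above:

```python
# Back-of-the-envelope check of the quoted RAIDEN figures.
dgx1_nodes = 54
gpus_per_node = 8                # each DGX-1 holds eight V100s
tensor_tflops_per_v100 = 125     # mixed-precision Tensor Core peak per GPU

total_pflops = dgx1_nodes * gpus_per_node * tensor_tflops_per_v100 / 1000
print(total_pflops)              # matches the headline 54 petaflops
```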
MORE POWER, LESS TOWER: AI MAY MAKE AIRCRAFT CONTROL TOWERS OBSOLETE
Airport control towers are an emblem of the aviation industry. A Canadian company wants to use its technology to make them a relic of the past.
Airport buffs may mourn the change. But Ontario-based Searidge Technologies believes its reasoning is, um, well-grounded.
It believes AI-powered video systems can better watch runways, taxiways and gate areas. By “seeing” airport operations through as many as 200 cameras, there’s no need for the sightline towers give air traffic controllers.
THE BUCK STOPS – AND STARTS – HERE FOR GPU COMPUTE
We have done surveys of the HPC codes, and about 70 percent of the processing cycles of the HPC centers is dominated by 15 different applications. This small number of applications dominates compute time. We have been focused on accelerating those applications first, as well as making them run well. There are hundreds of accelerated applications, but some of them are just moving to GPUs and they are going to take a while. You do have to know your workloads, and at every HPC center, all that we ask is that they look at their applications and first of all make sure they are on the most recent versions of the code, many of which have been accelerated by GPUs. On average, if you compare a server with four Volta GPUs against a best-in-class server with two “Skylake” Xeon SP processors, the average speedup across a basket of HPC applications is 20X. - Ian Buck
BOSTON UNIVERSITY RESEARCHERS USE AI TO DETECT KIDNEY DISEASE
Researchers at Boston University developed a deep learning algorithm that can assess kidney disease with better accuracy than trained pathologists.
Detecting kidney damage is of great importance; unlike many other diseases, its symptoms often don’t appear until the disease is very advanced. Getting this diagnosis wrong can lead to a series of life-threatening conditions.
“This rapid, scalable method can be deployable in the form of software at the point of care, and holds the potential for substantial clinical impact, including augmenting clinical decision making for nephrologists,” the team wrote in their research paper.