The document surveys the history of programming languages from the 1940s to the 2000s and looks ahead to future trends. It discusses the early pioneers and the first programming languages, then traces the evolution of languages decade by decade. Popular modern languages discussed include Python, Java, R, Julia, Lisp, JavaScript, C++, and Mojo, in the context of artificial intelligence and machine learning.
OSSNA 2017: Performance Analysis Superpowers with Linux BPF (Brendan Gregg)
Talk by Brendan Gregg for OSSNA 2017. "Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will be a deep dive on these new tracing, observability, and debugging capabilities, which sooner or later will be available to everyone who uses Linux. Whether you're doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
Kamailio combined with Asterisk creates an incredibly robust and durable VoIP framework. With scalability and security, adding Kamailio to an Asterisk deployment makes sense and saves money.
A quick introduction to Kamailio, the leading open source SIP server (based on OpenSER and SER). Kamailio is quite different from Asterisk, FreeSWITCH and many other VoIP platforms; why is that, and how do you start getting your head around Kamailio?
Scientists have recently explored the remarkable discovery that many cells produce thousands of much smaller RNA molecules, microRNAs. For instance, more than 500 different microRNAs have been found in human cells alone.
MicroRNA plays an important role in post-transcriptional gene regulation, acting through the RNA-induced silencing complex (RISC), and can interfere with and shut down gene activity.
MicroRNA is a form of ribonucleic acid that does not code for proteins.
Dynamic Routing Protocols, Care and Feeding - OSPF (Maximilian Wilhelm)
Congratulations! You get to administer a network with more than 2 routers. This talk explains why static routing is not a solution and can become a problem faster than you would like. As an introduction to dynamic routing and OSPF, this talk explains how routers find each other, how they exchange routes, what an area is, and how the link-state database works.
OSPF is demonstrated in practice using the Bird Internet Routing Daemon and in interoperation with classic vendors.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-montgomery
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, presents the "Building Complete Embedded Vision Systems on Linux—From Camera to Display" tutorial at the May 2019 Embedded Vision Summit.
There’s a huge wealth of open-source software components available today for embedding vision on the latest SoCs from suppliers such as NXP, Broadcom, TI and NVIDIA, at lower power and cost points than ever before. Testing vision algorithms is the first step, but what about the rest of your system? In this talk, Montgomery considers the best open-source components available today and explains how to select and integrate them to build complete video pipelines on Linux—from camera to display—while maximizing performance.
Montgomery examines and compares popular open-source libraries for vision, including Yocto, ffmpeg, gstreamer, V4L2, OpenCV, OpenVX, OpenCL and OpenGL. Which components do you need and why? He also summarizes the steps required to build and test complete video pipelines, common integration problems to avoid and how to work around issues to get the best performance possible on embedded systems.
This presentation provides an overview and the basics of the Data Plane Development Kit (DPDK). It is part of a Network Programming Series.
First, the presentation focuses on the network performance challenges of modern systems by comparing modern CPUs with modern 10 Gbps Ethernet links. It then touches on the memory hierarchy and kernel bottlenecks.
The following part explains the main DPDK techniques, such as polling, bursts, hugepages and multicore processing.
The DPDK overview explains how a DPDK application is initialized and run, and covers lockless queues (rte_ring), memory pools (rte_mempool), memory buffers (rte_mbuf), hashes (rte_hash), cuckoo hashing, the longest prefix match library (rte_lpm), poll mode drivers (PMDs) and the kernel NIC interface (KNI).
At the end, there are a few DPDK performance tips.
Tags: access time, burst, cache, dpdk, driver, ethernet, hub, hugepage, ip, kernel, lcore, linux, memory, pmd, polling, rss, softswitch, switch, userspace, xeon
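The longest-prefix-match rule implemented by rte_lpm can be illustrated in plain Python. This is a conceptual sketch with an invented route table, not the DPDK API (rte_lpm itself uses DIR-24-8 trie tables for near-constant-time lookups):

```python
import ipaddress

# Toy route table: (prefix, next hop). Invented for illustration.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "port0"),
    (ipaddress.ip_network("10.1.0.0/16"), "port1"),
    (ipaddress.ip_network("0.0.0.0/0"), "default"),
]

def lpm_lookup(addr: str) -> str:
    """Return the next hop of the most specific matching prefix."""
    ip = ipaddress.ip_address(addr)
    best = max(
        (net for net, _ in routes if ip in net),
        key=lambda net: net.prefixlen,
    )
    return next(port for net, port in routes if net == best)

print(lpm_lookup("10.1.2.3"))     # most specific match wins: port1
print(lpm_lookup("192.168.0.1"))  # matches only the default route
```

The linear scan keeps the matching rule visible; the whole point of rte_lpm is replacing that scan with table lookups that cost one or two memory accesses per packet.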
Video: https://www.youtube.com/watch?v=JRFNIKUROPE . Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF): an in-kernel virtual machine that can execute user space-defined programs. It is finding uses for security auditing and enforcement, enhancing networking (including eXpress Data Path), and performance observability and troubleshooting. Many new open source tools for performance analysis that use BPF have been written in the past 12 months. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
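One form of in-kernel summarization mentioned above is the power-of-two latency histogram used by bcc tools such as biolatency. The aggregation idea can be sketched in plain Python (a user-space illustration of the bucketing only, not BPF code):

```python
# Bucket each latency value into its power-of-two slot, the same
# aggregation shape that BPF tools emit from the kernel.
def log2_hist(values_us):
    hist = {}
    for v in values_us:
        slot = v.bit_length()  # slot s holds 2^(s-1) <= v < 2^s
        hist[slot] = hist.get(slot, 0) + 1
    return hist

latencies = [3, 5, 6, 12, 13, 120, 130, 1500]  # microseconds, invented
print(log2_hist(latencies))
```

Summarizing to a handful of buckets in-kernel is what makes these tools cheap enough for production: only the histogram, not every event, crosses to user space.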
This talk was given at a seminar of the InfinIT interest group on High-Level Languages for Embedded Systems on 2 October 2013. Read more about the interest group here: http://infinit.dk/dk/interessegrupper/hoejniveau_sprog_til_indlejrede_systemer/hoejniveau_sprog_til_indlejrede_systemer.htm
Course: Programming Languages and Paradigms:
A brief introduction to imperative programming principles: history, von Neumann architecture, BNF, variables (r-values, l-values), modifiable data structures, order of evaluation, static and dynamic scopes, referencing environments, call by value, control flow (sequencing, selection, iteration), ...
Information about the levels of programming languages, types of programming languages, the principal paradigms, a few programming languages, and criteria for a good language.
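Two of the topics above, call by value and modifiable data structures, can be illustrated with a short Python sketch (Python passes object references by value, so rebinding a parameter and mutating its object behave differently):

```python
def rebind(x):
    x = [99]        # rebinds the local name only; caller is untouched

def mutate(x):
    x.append(99)    # mutates the shared object; caller sees the change

a = [1, 2]
rebind(a)
print(a)  # [1, 2]
mutate(a)
print(a)  # [1, 2, 99]
```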
Analysis insight about a Flyball dog competition team's performance (roli9797)
Insights from my analysis of a Flyball dog competition team's performance last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
Unleashing the Power of Data: Choosing a Trusted Analytics Platform (Enterprise Wired)
In this guide, we'll explore the key considerations and features to look for when choosing a trusted analytics platform that meets your organization's needs and delivers actionable intelligence you can trust.
Adjusting OpenMP PageRank: SHORT REPORT / NOTES (Subhajit Sahu)
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared-memory system with multiple CPUs, each with multiple cores, to accelerate PageRank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. As steps in this direction, experiments were conducted to implement PageRank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for PageRank in OpenMP mode (with multiple threads). The hybrid approach, on the other hand, runs certain primitives (e.g., sumAt, multiply) in sequential mode.
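For reference, the computation being parallelized is classic power-iteration PageRank. A minimal sequential Python sketch (damping factor 0.85 and the toy graph are invented for illustration; the OpenMP versions parallelize the per-vertex loops below):

```python
def pagerank(graph, d=0.85, iters=50):
    """graph: dict mapping vertex -> list of out-neighbours."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in graph}
        for v, outs in graph.items():
            if not outs:                  # dangling vertex: spread evenly
                for u in graph:
                    new[u] += d * rank[v] / n
            else:                         # share rank among out-neighbours
                for u in outs:
                    new[u] += d * rank[v] / len(outs)
        rank = new
    return rank

g = {0: [1], 1: [0, 2], 2: [0]}
r = pagerank(g)
print(r)  # ranks sum to ~1.0
```

The inner loops are embarrassingly parallel over vertices, which is why vertex partitioning (and making it NUMA-aware) is the lever the report is adjusting.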
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, AI, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead Prasad and Procure.FYI's Co-Founder
The Building Blocks of QuestDB, a Time Series Database (Javier Ramirez)
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Learn SQL from Basic Queries to Advanced Queries (manishkhaire30)
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
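The basic-to-aggregate progression described above can be shown end to end with Python's built-in sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 250.0), ("south", 80.0)],
)

# Foundations: retrieval with filtering
rows = con.execute(
    "SELECT region, amount FROM sales WHERE amount > 90"
).fetchall()
print(rows)  # [('north', 100.0), ('north', 250.0)]

# Aggregation: total per region, largest first
totals = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY 2 DESC"
).fetchall()
print(totals)  # [('north', 350.0), ('south', 80.0)]
```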
2. History of Programmers
• Before 1940s: The first programmers
• The 1940s: Von Neumann, Konrad Zuse & Plankalkül
• The 1950s: The First Programming Language
• The 1960s: An Explosion in Programming languages
• The 1970s: Simplicity, Abstraction, Study
• The 1980s: Consolidation and New Directions
• The 1990s: Internet and the Web
• The 2000s: tbd
3. Early History: The First Programmer
• Jacquard loom of early 1800s
– Translated card patterns into cloth designs
• Charles Babbage’s analytical engine (1830s & 40s)
– Programs were cards with data and operations
• Ada Lovelace – first programmer
“The engine can arrange and combine its numerical quantities exactly as if they were letters or any other general symbols; and in fact might bring out its results in algebraic notation, were provision made.”
4. Jacquard loom of early 1800s Charles Babbage’s analytical engine (1830s & 40s) Ada Lovelace – first programmer
5. The 1940s: Von Neumann and Zuse
John von Neumann led a team that built computers with stored programs and a central processor, building on his work as a consultant on the ENIAC.
6. Konrad Zuse and Plankalkül
Konrad Zuse began work on Plankalkül (plan calculus), the first algorithmic programming language, with an aim of creating the theoretical preconditions for the formulation of problems of a general nature.
Seven years earlier, Zuse had developed and built the world's first binary digital computer, the Z1. He completed the first fully functional program-controlled electromechanical digital computer, the Z3, in 1941.
Only the Z4, the most sophisticated of his creations, survived World War II.
7. Machine Code (1940s)
•Initial computers were programmed in raw machine code.
•These were entirely numeric.
•What was wrong with using machine code? Everything!
•Poor readability
•Poor modifiability
•Expression coding was tedious
•Inherited deficiencies of hardware, e.g., no indexing or floating-point numbers
8. Pseudocodes (1949)
•Short Code or SHORTCODE - John Mauchly, 1949.
•Pseudocode interpreter for math problems, on Eckert and Mauchly’s BINAC and later on UNIVAC I and II.
•Possibly the first attempt at a higher-level language.
•Expressions were coded left to right
9. More Pseudocodes
Speedcoding; 1953-4
• A pseudocode interpreter for math on the IBM 701 and IBM 650.
• Developed by John Backus
• Pseudo-ops for arithmetic and math functions
• Conditional and unconditional branching
• Auto-increment registers for array access
• Slow, but still dominated by the slowness of software math
• The interpreter left only 700 words for the user program
Laning and Zierler System – 1953
• Implemented on the MIT Whirlwind computer
• First "algebraic" compiler system
• Subscripted variables, function calls, expression translation
• Never ported to any other machine
10. The 1950s: The First Programming Language
• Pseudocodes: assembly-language-like interpreters
• Fortran: the first higher-level programming language
• COBOL: the first business-oriented language
• Algol: one of the most influential programming languages ever designed
• LISP: the first language to depart from the procedural paradigm
• APL: A Programming Language
11. The 1960s: An Explosion in Programming Languages
• The development of hundreds of programming languages
• PL/I designed in 1963-4
– supposed to be all-purpose
– combined features of FORTRAN, COBOL and Algol 60 and more!
– translators were slow, huge and unreliable
– some say it was ahead of its time...
• Algol 68
• SNOBOL
• Simula
• BASIC
12. The 1970s: Simplicity, Abstraction, Study
• Algol-W - Niklaus Wirth and C.A.R. Hoare
– reaction against the 1960s
– simplicity
• Pascal
– small, simple, efficient structures
– for teaching programming
• C - 1972 - Dennis Ritchie
– aims for simplicity by reducing restrictions of the type system
– allows access to the underlying system
– interface with the O/S - UNIX
13. The 1980s: Consolidation and New Paradigms
• Ada
– US Department of Defense
– European team led by Jean Ichbiah. (Sam Lomonaco was also on the Ada team :-)
• Functional programming
– Scheme, ML, Haskell
• Logic programming
– Prolog
• Object-oriented programming
– Smalltalk, C++, Eiffel
14. Functional Programming
• Common Lisp: consolidation of LISP dialects spurred practical use, as did the development of Lisp machines.
• Scheme: a simple and pure LISP-like language used for teaching programming.
• Logo: used for teaching young children how to program.
• ML: (Meta Language) a strongly typed functional language first developed by Robin Milner in the 70s
• Haskell: a polymorphically typed, lazy, purely functional language.
15. Smalltalk (1972-80)
•Developed at Xerox PARC by Alan Kay and colleagues (esp. Adele Goldberg), inspired by Simula 67
•The first compilation in 1972 was written on a bet to come up with "the most powerful language in the world" in "a single page of code".
•In 1980, Smalltalk-80, a uniformly object-oriented programming environment, became available as the first commercial release of the Smalltalk language
•Pioneered the graphical user interface everyone now uses
•Industrial use continues to the present day
16. C++ (1985)
•Developed at Bell Labs by Stroustrup
•Evolved from C and SIMULA 67
•Facilities for object-oriented programming, taken partially from SIMULA 67, added to C
•Also has exception handling
•A large and complex language, in part because it supports both procedural and OO programming
•Rapidly grew in popularity, along with OOP
•ANSI standard approved in November, 1997
17. Eiffel
Eiffel - a related language that supports OOP
- (Designed by Bertrand Meyer - 1992)
- Not directly derived from any other language
- Smaller and simpler than C++, but still has most of the power
18. The 1990s: the Internet and Web
During the 90s, object-oriented languages (mostly C++) became widely used in practical applications.
The Internet and Web drove several phenomena:
– Adding concurrency and threads to existing languages
– Increased use of scripting languages such as Perl and Tcl/Tk
– Java as a new programming language
19. Java
• Developed at Sun in the early 1990s with the original goal of a language for embedded computers
• Principals: Bill Joy, James Gosling, Mike Sheridan, Patrick Naughton
• Original name, Oak, changed for copyright reasons
• Based on C++ but significantly simplified
• Supports only OOP
• Has references, but not pointers
• Includes support for applets and a form of concurrency (i.e. threads)
20. The future
• In the 60s, the dream was a single all-purpose language (e.g., PL/I, Algol)
• The 70s and 80s dream was expressed by Winograd (1979):
“Just as high-level languages allow the programmer to escape the intricacies of the machine, higher level programming systems can provide for manipulating complex systems. We need to shift away from algorithms and towards the description of the properties of the packages that we build. Programming systems will be declarative not imperative”
• Will that dream be realised?
• Programming is not yet obsolete
36. Currently Trending Programming Languages
There are several programming languages that are commonly used in the fields of AI, ML, and cybersecurity. Here are some of the most popular ones:
1. Python: Python is one of the most popular programming languages for AI and ML development due to its simple syntax and readability. It supports a variety of frameworks and libraries, which allows for more flexibility and creates endless possibilities for an engineer to work with. Some of the most popular Python libraries for machine learning include: scikit-image, OpenCV, TensorFlow, PyTorch, Keras, NumPy, NLTK, SciPy, and scikit-learn.
2. Java: Java is a general-purpose programming language that is used for creating mobile, desktop, web, and cloud applications. It is also used for developing AI systems. Java is known for its scalability, security, and cross-platform compatibility.
3. R: R is a programming language that is used for statistical computing and graphics. It is widely used in data analysis, machine learning, and scientific research. R has a large number of libraries and packages that make it easy to perform complex statistical analyses.
4. Julia: Julia is a high-level, high-performance programming language that is designed for numerical and scientific computing. It is used for developing AI and ML models, as well as for data analysis and visualization.
5. Lisp: Lisp is a family of programming languages that are used for AI and ML development. Lisp is known for its powerful macro system, which allows developers to extend the language itself. Lisp is also used for symbolic computing, which is a type of computing that deals with symbols and expressions.
6. JavaScript: JavaScript is a programming language that is used for creating highly interactive browser-based applications. It is also used for developing AI systems. JavaScript is known for its flexibility and ease of use.
7. C++: C++ is a general-purpose programming language that is used for developing AI and ML models, as well as operating systems, system software, and embedded systems. C++ is known for its speed and efficiency.
• Here are some PowerPoint presentations that you might find useful:
1. Machine Learning in Cyber Security - This presentation provides a holistic view of machine learning in cybersecurity for better organizational readiness.
2. AI and ML in Cybersecurity - This presentation discusses the limitations of machine learning and issues of explainability, where deep learning should never be applied, and examples of how the blind application of algorithms can lead to wrong results.
• Please note that the information provided is current as of January 2024 and may be subject to change.
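For a flavor of what the ML libraries listed above automate, here is the core loop of a linear-model fit written with nothing but the standard library (a toy sketch with invented data; real projects would use NumPy, scikit-learn, or PyTorch for this):

```python
# Fit y = w*x + b to toy data by gradient descent on mean squared error:
# the kind of loop that ML libraries vectorize, scale, and generalize.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # from y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # 2.0 1.0
```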
37. Comparing the performance of Mojo, Python, and JavaScript
Comparing the performance of Mojo, Python, and JavaScript in the context of machine learning.
According to a Medium article, Python and Mojo are two popular programming languages that have been widely used in various applications, from web development to machine learning. While both Python and Mojo share some similarities, they also have notable differences that set them apart. As a developer or programmer, it’s essential to understand the fundamental differences between these languages so that you can choose the one that best suits your needs.
Another Medium article compares the performance of JavaScript and Python for machine learning. The article states that JavaScript’s computational performance is still much better than Python’s. However, the maturity of the libraries (which often have underlying modules written in C) means that operations on large datasets can offer so much more than sheer computational power. But there is still a place for JavaScript in machine learning.
38. Programming Languages for Civil Engineering
• Civil and structural engineering are fields that require a lot of computational power. Learning to code can help engineers automate repetitive tasks, improve their workflow, and increase their productivity. According to The Computational Engineer, the following programming languages are commonly used in the civil and structural engineering industry:
1. Grasshopper: A visual programming language that can be easily adopted by civil and structural engineers. It is a plugin to a CAD and 3D-modelling software called Rhinoceros. It has a low bar to entry but is powerful enough to manage most of your workflows, including your Revit workflows.
2. Dynamo: A popular visual programming language for building and civil engineers. It is a plugin for Autodesk Revit and can be used to automate repetitive tasks and improve workflows.
3. BHoM: A data structure and toolset for building and architecture that can be used to create custom workflows and automate tasks.
4. C#: A general-purpose programming language that is widely used in the civil engineering industry. It is used to develop software applications and tools for civil engineering projects.
• These languages have been designed with civil engineering workflows in mind and offer a lower bar to entry for civil and structural engineers. They are also powerful enough to manage most of your workflows, including your Revit workflows. If you are new to coding, Grasshopper is a great first language to learn as it has an easy-to-adopt and debug interface.
39. Never Ever Ending Life History of Programming Languages
• 18 New Programming Languages to Learn in 2024 | Built In