Hekovnik 3 Mesece Kasneje (Hekovnik, Three Months Later)
Simon Belak • Technology • 285 views • 22 slides
Recommended
Presentation of a project done in the NEAR Lab (Embry-Riddle Aeronautical University) in 2013: a study and test of the Tornado web server and NoSQL database management systems (Redis & MongoDB).
Project Presentation 2013 NEAR Lab
Quentin Petit
Metahevristike (Metaheuristics)
Simon Belak
Razumevanje Naravnega Jezika, Tekst, Kontekst (Natural Language Understanding: Text and Context)
Simon Belak
Family
glennhayahay
Prgišče Lispa (A Handful of Lisp)
Simon Belak
O Filozofih In Programih (On Philosophers and Programs)
Simon Belak
What building the future will take, and how we can make better tools to get there.
Tools for building the future
Simon Belak
Having programmers do data science is terrible; if only everyone else were not even worse. The problem, of course, is tools. We seem to have settled on either a bunch of disparate libraries thrown into a more or less agnostic IDE, or some point-and-click wonder which, no matter how glossy, never seems to truly fit our domain once we get down to it. The dual Lisp tradition of grow-your-own-language and grow-your-own-editor gives me hope there is a third way. This presentation is a meditation on how I approach data problems with Clojure, what I believe the process of doing data science should look like, and the tools needed to get there. Some already exist (or can at least be bodged together); others can be made with relative ease (and we are already working on some of these); but a few will take a lot more hammock time. Clojure is fantastic for data manipulation and rapid prototyping, but falls short when it comes to communicating your insights. What is lacking are good visualization libraries and (shareable) notebook-like environments. I'll show my workflow in org-babel, which weaves Clojure with R (for ggplot) and Python (for scikit-learn), and tell you why it's wrong, how the IPythons of the world have trapped us in a local maximum, and how we need a reconceptualization similar to what a REPL does to programming. All this interposed with my experience doing data science with Clojure (everything from ETL to on-the-spot analysis during a brainstorming session).
Doing data science with Clojure
Simon Belak
More from Simon Belak
Musings on exploratory analysis, what parts of it can be automated and how, and what that means for our entire workflow.
Exploratory analysis
Simon Belak
How to design your ETL process and data warehouse, how to model your data, and how to analyze it.
Levelling up your data infrastructure
Simon Belak
Recommendation algorithms and their variations, such as ranking, are the most common way for machine learning to find its way into a product where it is not the main focus. In this talk we'll dig into the subtleties of making recommendation algorithms a seamless and integral part of your UX (the goal: they should completely fade into the background; the user should not be aware she's interacting with any kind of machine learning, it should just feel right, perhaps smart or even a tad like cheating); how to solve the cold start problem (and that of having little training data in general); and how to effectively collect feedback data. I'll be drawing on my experience building Metabase, an open source analytics/BI tool, where we extensively use recommendations and ranking to keep users in a state of flow when exploring data; to help with discoverability; and as a way to gently teach analysis and visualization best practices; all on the way towards building an AI data scientist.
The subtle art of recommendation
Simon Belak
First steps in doing analytics using Metabase.
Metabase Ljubljana Meetup #2
Simon Belak
Things to consider when setting up your ETL pipeline, data warehouse, and choosing an analytics tool.
Metabase lj meetup
Simon Belak
In this talk we will look at how to efficiently (in both space and time) summarize large, potentially unbounded, streams of data by approximating the underlying distribution using so-called sketch algorithms. The main approach we are going to be looking at is summarization via histograms. Histograms have a number of desirable properties: they work well in an on-line setting, are embarrassingly parallel, and are space-bound. Not to mention they capture the entire (empirical) distribution which is something that otherwise often gets lost when doing descriptive statistics. Building from that we will delve into related problems of sampling in a stream setting, and updating in a batch setting; and highlight some cool tricks such as capturing time-dynamics via data snapshotting. To finish off we will touch upon algorithms to summarize categorical data, most notably count-min sketch.
Sketch algorithms
Simon Belak
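The count-min sketch mentioned at the end of the abstract above is small enough to illustrate directly. This is a toy sketch in Python rather than the talk's Clojure setting, and the class, parameters, and hashing scheme are my own illustrative choices, not anything from the talk:

```python
import hashlib

class CountMinSketch:
    """Toy count-min sketch: approximate frequencies in fixed space.

    `depth` hash rows of `width` counters each. Estimates never
    undercount the true frequency; collisions can only inflate them.
    """

    def __init__(self, width=256, depth=4):
        self.width = width
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One independent-ish hash per row, derived by salting with the row index.
        for seed, _ in enumerate(self.rows):
            digest = hashlib.md5(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for row, b in zip(self.rows, self._buckets(item)):
            row[b] += count

    def estimate(self, item):
        # The least-collided row gives the tightest upper bound.
        return min(row[b] for row, b in zip(self.rows, self._buckets(item)))

cms = CountMinSketch()
for word in ["heavy"] * 100 + ["light"] * 3:
    cms.add(word)
```

The space used is fixed (`width * depth` counters) no matter how long the stream runs, which is exactly the property the abstract is after.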
Transducers -- composable algorithmic transformations decoupled from input or output sources -- are Clojure's take on data transformation. In this talk we will look at what makes a transducer; push their composability to the limit, chasing the panacea of building complex single-pass transformations out of reusable components (e.g. calculating a bunch of descriptive statistics like sum, sum of squares, mean, variance, ... in a single pass without resorting to a spaghetti-ball fold); and explore how their decoupling from input and output traversal opens up some interesting possibilities, as they can be made to work in both online and batch settings; all drawing on practical examples of using Clojure to analyze "awkward-size" data.
Transducing for fun and profit
Simon Belak
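The "descriptive statistics in a single pass" idea from the abstract above can be sketched outside Clojure too. Below is a hypothetical Python analogue (the names `fold`, `juxt`, and `transduce` are my own; this is not the transducer API itself): each fold is an (init, step, finalize) triple, and `juxt` composes any number of them into one reducing function, so a single traversal yields count, sum, and sum of squares, from which mean and variance follow:

```python
def fold(init, step, finalize=lambda acc: acc):
    """A reducing function as data: initial value, step, and a finishing pass."""
    return (init, step, finalize)

count = fold(0, lambda acc, _: acc + 1)
total = fold(0.0, lambda acc, x: acc + x)
total_sq = fold(0.0, lambda acc, x: acc + x * x)

def juxt(*folds):
    """Run several folds side by side in one pass, transducer-style."""
    def step(accs, x):
        return [f[1](a, x) for f, a in zip(folds, accs)]
    def finalize(accs):
        return [f[2](a) for f, a in zip(folds, accs)]
    return fold([f[0] for f in folds], step, finalize)

def transduce(f, xs):
    init, step, finalize = f
    acc = init
    for x in xs:
        acc = step(acc, x)
    return finalize(acc)

n, s, ss = transduce(juxt(count, total, total_sq), [1, 2, 3, 4])
mean = s / n                   # 2.5
variance = ss / n - mean ** 2  # population variance: 1.25
```

The point is the one in the abstract: the statistics compose into a single pass instead of a tangled hand-written fold.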
You have defined your metrics, set up dashboards, and started to incorporate data into your everyday work. Great, but I have some bad news for you: almost certainly some of your metrics are wrong. At best these mistakes mean you are not getting all the insights you could; at worst, some of the conclusions you have drawn from them are wrong. In this talk we will go through the most common and pernicious mistakes and unravel the mechanisms behind them, so that by the end of the talk you will be equipped with an analytical toolset to spot them on your own. The main classes of errors we will cover are: viewing data as a static process; not considering error margins and variance; picking the wrong reference point; assuming your population is homogeneous; and improperly accounting for costs.
Your metrics are wrong
Simon Belak
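The "error margins and variance" class of mistakes from the abstract above has a standard worked example. This Python helper and its numbers are invented for illustration (a normal-approximation margin of error for a conversion rate), not taken from the talk:

```python
import math

def conversion_margin(conversions, visitors, z=1.96):
    """Conversion rate and its ~95% margin of error (normal approximation)."""
    p = conversions / visitors
    se = math.sqrt(p * (1 - p) / visitors)
    return p, z * se

# Two variants of a landing page, 1000 visitors each (made-up numbers):
p_a, m_a = conversion_margin(48, 1000)   # 4.8% +/- ~1.3 percentage points
p_b, m_b = conversion_margin(62, 1000)   # 6.2% +/- ~1.5 percentage points

# The intervals overlap, so the apparent lift of B over A may be noise.
overlap = (p_a + m_a) > (p_b - m_b)
```

Reporting 4.8% vs 6.2% without those margins is exactly the "not considering error margins and variance" mistake.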
Writing correct smart contracts is hard (a recent study estimated that 3% of Ethereum contracts in the wild have some sort of security vulnerability; we all know of the DAO and Parity exploits, ...). There are two main reasons for this. First and foremost, developing for the blockchain is quite different from what most programmers are used to. The level of concurrency is far beyond our (von Neumann) intuition and mental models, and you can't stop and inspect running code as you can in other systems. Taken together, blockchain is closer to a physical or living system than to conventional software, a fact not reflected in the tools available. Compared to other domains, our tooling and programming languages are somewhere between rudimentary and bad, and a far cry from where they would need to be to augment developers and make programming for the blockchain less alien and less error-prone. In this talk we will first unpack what makes programming for the blockchain hard, and what the most common types of vulnerabilities and their causes are. Then we will look at state-of-the-art programming language research in correctness proving and in programming massively concurrent systems, and how these can be applied to programming smart contracts; revisit some technologies from the past that didn't get traction at the time but are nevertheless worth studying; and finish off by trying to imagine how programming for the blockchain should, and perhaps one day will, look.
Writing smart contracts the sane way
Simon Belak
Online statistical analysis using transducers and sketch algorithms. Don’t know what either is? You are going to learn something very cool (and perspective-changing) then. Know them, but want an experience report? Got you covered, fam.
Online statistical analysis using transducers and sketch algorithms
Simon Belak
OpenAI recently published a fun paper in which they showed that evolution algorithms can train policy networks to perform on par with state-of-the-art deep reinforcement learning. In this talk we'll try to reimplement the main ideas from that paper using Neanderthal (blazing-fast matrix and linear algebra computations) and Cortex (neural networks); make it massively distributed using Onyx; build a simulation environment using re-frame; and of course save our princess from no particular harm in our toy game example.
Save the princess
Simon Belak
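The core trick of the paper referenced above (evolution strategies as a black-box gradient estimator) fits in a few lines. This is a 1-D Python toy of my own, assuming a made-up quadratic "reward", not anything from the talk's Neanderthal/Cortex/Onyx stack:

```python
import random

def es_step(theta, f, sigma=0.1, alpha=0.02, pairs=25, rng=random):
    """One evolution-strategies update: perturb theta with Gaussian noise,
    weight each perturbation by the reward it earns, and move uphill.
    Uses antithetic (mirrored) sampling to cut the estimator's variance."""
    grad = 0.0
    for _ in range(pairs):
        eps = rng.gauss(0, 1)
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    grad /= 2 * pairs * sigma
    return theta + alpha * grad

reward = lambda x: -(x - 3.0) ** 2   # toy "environment": best policy is x = 3
rng = random.Random(42)
theta = 0.0
for _ in range(500):
    theta = es_step(theta, reward, rng=rng)
# theta converges towards the optimum at 3
```

No backpropagation is ever needed, which is why the approach parallelizes so well (each perturbation's reward can be evaluated on a different worker).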
How to systematically open a new market where every step is supported by data, how to set up learning loops, and where to look for optimization opportunities.
Data driven going to market strategy
Simon Belak
You can do cool and unexpected things if your entire type system is a first-class citizen and accessible at runtime. With the introduction of spec, Clojure got its own distinct spin on a type system. Just as macros add another -time (compile time, alongside runtime) where the full power of the language can be used, spec does the same for describing data. The result is an entire additional type system, first-class and accessible at runtime, that facilitates validation, generative testing (a la QuickCheck), destructuring (pattern matching into deeply nested data), data macros (recursive transformations of data), and a pluggable error system. And then you can start building on top of it. The talk will be half an introduction to spec and the ideas packed within it, and half an experience report on instrumenting a 15k LOC production codebase (primarily ETL and analytics) with spec.
Spec: a lisp-flavoured type system
Simon Belak
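To make the "specs as runtime-checkable descriptions of data" idea from the abstract above concrete, here is a deliberately tiny Python sketch in the same spirit. It is nothing like spec's real API (`valid`, `pos_int`, and `user_spec` are my own illustrative names); it only shows specs built from plain data and predicates, checked at runtime:

```python
def valid(spec, value):
    """Check `value` against a spec: a predicate, a map of key -> spec,
    or a one-element list describing a homogeneous collection."""
    if callable(spec):
        return spec(value)
    if isinstance(spec, dict):       # map spec: every key required
        return isinstance(value, dict) and all(
            k in value and valid(s, value[k]) for k, s in spec.items())
    if isinstance(spec, list):       # homogeneous collection spec
        return isinstance(value, list) and all(valid(spec[0], v) for v in value)
    return False

pos_int = lambda v: isinstance(v, int) and v > 0
user_spec = {
    "name": lambda v: isinstance(v, str),
    "age": pos_int,
    "scores": [pos_int],
}

ok = valid(user_spec, {"name": "Ada", "age": 36, "scores": [1, 2]})   # True
bad = valid(user_spec, {"name": "Ada", "age": -1, "scores": [1]})     # False
```

Because the spec is itself plain data, it can be inspected, composed, and (as the abstract suggests) used as a base for generative testing and destructuring.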
Clojure has always been good at manipulating data. With the release of spec and Onyx ("a masterless, cloud scale, fault tolerant, high performance distributed computation system") good became best. In this talk you will learn about a streaming data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable, and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data; the inferences and automations that can be built on top of that; and how and why Clojure is a natural choice for tasks that involve a lot of data manipulation, touching both on functional programming and on Lisp specifics such as code-is-data. We will look at how such an approach can be used to manage a data warehouse by automatically inferring materialized views from raw incoming data or other views, based on a combination of heuristics, statistical analysis (seasonality, outlier removal, ...) and predefined ontologies. Doing so is a practical way to maintain a large number of views, increasing their availability and abstracting the complexity into declarative rules, rather than having an ETL pipeline with dozens or even hundreds of hand-crafted tasks. The system described requires relatively little effort upfront but can easily grow with one's needs in terms of both scale and scope. With its good introspection capabilities and strong decoupling it is, for instance, an excellent substrate for putting machine learning algorithms in production, which is the final use case we will dive into.
A data layer in clojure
Simon Belak
Segmentation is key to effectively addressing and converting potential customers. Simon Belak, head of analytics at GoOpti and transmedia editor at the critical newspaper Tribuna, revealed how to discover segments from data. In his words, it is entirely unjustified that segmentation is mostly static and done blindly, without regard for the data. In the talk he presented an alternative: analytical, partly automated discovery of segments from data. Using concrete examples, he showed how to map data about customer interactions (page visits as indicators of interest, survey answers, patterns of movement across pages, email opens, ...) into a customer model, and continued with the division into segments. Simon concluded by highlighting the most common pitfalls, and small tricks for cases where we have little data or the data is unclear.
Odkrivanje segmentov iz podatkov (Discovering Segments from Data)
Simon Belak
Clojure has always been good at manipulating data. With the release of spec and Onyx ("a masterless, cloud scale, fault tolerant, high performance distributed computation system") good became best. In this talk I will walk you through a data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable, and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data, and the inferences and automations that can be built on top of that.
Using Onyx in anger
Simon Belak
Clojure has always been good at manipulating data. With the release of spec and Onyx ("a masterless, cloud scale, fault tolerant, high performance distributed computation system") good became best. In this talk you will learn about a data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable, and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data, and the inferences and automations that can be built on top of that.
Spec + onyx
Simon Belak
Whenever a programming language comes out with a new feature, we smug lisp weenies shrug and point out that Lisp had it in the early seventies; and if you look at the list of influences of any given language, there is bound to be a Lisp in there. In this talk I will try to unpack what makes Lisp special, why it is called "the programming programming language", how it changes one's thinking, and how that thinking can be applied elsewhere.
Dao of lisp
Simon Belak
Successfully forecasting future demand is key to allowing GoOpti its low prices while isolating transport partners from risk. In this talk Simon Belak, Chief Data Scientist at GoOpti, will take you through how he approaches forecasting and the lessons he has learned along the way. The focus is going to be on models that do not require excessive amounts of data, are legible, and work well as part of a continuous process (rather than as a one-off problem).
Predicting the future with goopti
Simon Belak
In this talk, you will discover how a 15k LOC production codebase was instrumented with spec so you don't have to (but probably should). Validation; testing; destructuring; composable "data macros" via conformers; we've tried spec in all its multifaceted glory. You will come away with a distillation of lessons learned, interspersed with musings on how spec alters development flow and one's thinking.
Living with-spec
Simon Belak
Hekovnik 3 Mesece Kasneje
1. Hekovnik, 3 months later [email_address]
2. Hackerspace
3.
4.
5. Hekovnik, a playground of technologies
6. The old Technology Park
7.
8. No strings attached
9. Today
10. 2 rooms
11. 18 members
12. The beginnings of a community
13. Communal work days
14. Workshops
15. Lectures
16. Little rituals
17. Tomorrow
18. Tea room / social space
19. Hardware & robotics lab
20.
21.
22.