Flux and InfluxDB 2.0
Paul Dix

@pauldix

paul@influxdata.com
• Data-scripting language

• Functional

• MIT Licensed

• Language, VM, engine, planner, optimizer
Language + Query Engine
2.0
Biggest Change Since 0.9
Clean Migration Path
Compatibility Layer
• MIT Licensed

• Multi-tenanted

• Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

• OSS single server

• Cloud usage-based pricing

• Dedicated Cloud 

• Enterprise on-premise
Consistent Documented API
Collection, Write/Query, Streaming & Batch Processing, Dashboards
Officially Supported Client Libraries
Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
Visualization Libraries
Multi-tenant roles
• Operator

• Organization Administrator

• User
Data Model
• Organizations

• Buckets (retention)

• Time series data

• Tasks

• Runs

• Logs

• Dashboards

• Users

• Tokens

• Authorizations

• Protos (templates)

• Scrapers

• Telegrafs

• Labels
Ways to run Flux (interpreter, InfluxDB 1.7 & 2.0)
Flux Basics
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Comments
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Named Arguments
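Because every argument is named, arguments can be supplied in any order. A minimal sketch (assuming range also accepts an explicit stop argument):
// named arguments can be passed in any order
from(bucket:"telegraf/autogen")
|> range(stop: now(), start: -1h)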
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
String Literals
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Buckets, not DBs
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Duration Literal
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter from a specific start time
|> range(start:2018-11-07T00:00:00Z)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Time Literal
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Pipe forward operator
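The pipe-forward operator sends the stream on its left into the next function's pipe parameter. A sketch of the equivalent explicit call (assuming transformations expose that parameter as tables):
// |> fills the tables parameter, so these two forms are equivalent
range(tables: from(bucket:"telegraf/autogen"), start: -1h)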
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Anonymous Function
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
and r.host == "serverA")
Predicate Function
// variables
some_int = 23
// variables
some_int = 23
some_float = 23.2
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
some_array = [1, 6, 20, 22]
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
some_array = [1, 6, 20, 22]
some_object = {foo: "hello", bar: 22}
// defining a pipe forwardable function
square = (tables=<-) =>
tables
|> map(fn: (r) => {r with _value: r._value * r._value})
// defining a pipe forwardable function
square = (tables=<-) =>
tables
|> map(fn: (r) => {r with _value: r._value * r._value})
This is potentially new
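The `with` syntax builds a new object from an existing one, overriding or adding keys. A small illustrative sketch (the record values here are made up):
// hypothetical record: `with` copies r and overrides/adds keys
r = {host: "serverA", _value: 2.0}
r2 = {r with _value: 4.0, squared: true}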
// defining a pipe forwardable function
square = (tables=<-) =>
tables
|> map(fn: (r) => {r with _value: r._value * r._value})
from(bucket:"foo")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "samples")
|> square()
|> filter(fn: (r) => r._value > 23.2)
Data Sources (inputs)
Data Sinks (outputs)
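As a sketch of how a source feeds a sink, a hypothetical pipeline reading from one bucket and writing the result to another (assuming a to() output function and a "cpu_summary" bucket exist):
// input: read the last hour of cpu data
from(bucket:"telegraf/autogen")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu")
|> mean()
// output: write the summarized result to another bucket
|> to(bucket: "cpu_summary")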
Tasks
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
tasks
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
cron scheduling
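Tasks that run on a fixed interval rather than a cron expression would presumably use an interval option instead, for example:
// hypothetical task option using an interval instead of cron
option task = {
name: "email alert digest",
every: 1h
}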
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
packages & imports
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
map
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
String interpolation
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
Ship data elsewhere
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
Store secrets in a store like Vault
Open Questions
User Packages & Dependencies
// in a file called package.flux
package "paul"
option version = "0.1.1"
// define square here or…
// in a file called package.flux
package "paul"
option version = "0.1.1"
// import the other package files
// they must have package "paul" declaration at the top
// only package.flux has the version
import "packages"
packages.load(files: ["square.flux", "utils.flux"])
// in a file called package.flux
package "paul"
option version = "0.1.1"
// import the other package files
// they must have package "paul" declaration at the top
// only package.flux has the version
import "packages"
packages.load(files: ["square.flux", "utils.flux"])
// or this
packages.load(glob: "*.flux")
import "myorg/paul" // latest, will load package.flux
data |> paul.square()
import "myorg/paul", "0.1.0" // specific version
// 1. look in $fluxhome/myorg/paul/package.flux
// 2. look in $fluxhome/myorg/paul/0.1.0/package.flux
// 3. look in cloud2.influxdata.com/api/v1/packages/myorg/paul
data |> paul.square()
import "myorg/paul", ">=0.1.0" // at least this version
data |> paul.square()
Error Handling?
import "slack"
// what if this returns an error?
ret = slack.send(room: "foo", message: "testing this", token: "...")
Option Types?
match ret {
// on match ret gets mapped as the new type
Error => {
// do something with ret
},
Else => {
// do something with ret
}
}
Loops?
records = [
{name: "foo", value: 23},
{name: "bar", value: 23},
{name: "asdf", value: 56}
]
records = [
{name: "foo", value: 23},
{name: "bar", value: 23},
{name: "asdf", value: 56}
]
// simple loop over each
records
|> map(fn: (r) => {name: r.name, value: r.value + 1})
records = [
{name: "foo", value: 23},
{name: "bar", value: 23},
{name: "asdf", value: 56}
]
// simple loop over each
records
|> map(fn: (r) => {name: r.name, value: r.value + 1})
// compute the sum
sum = records
|> reduce(
fn: (r, accumulator) => r.value + accumulator,
i: 0
)
records = [
{name: "foo", value: 23},
{name: "bar", value: 23},
{name: "asdf", value: 56}
]
// simple loop over each
records
|> map(fn: (r) => {name: r.name, value: r.value + 1})
// compute the sum
sum = records
|> reduce(
fn: (r, accumulator) => r.value + accumulator,
i: 0
)
// get matching records
foos = records
|> filter(fn: (r) => r.name == "foo")
while(fn: () => {
// do stuff
})
while(fn: () => {
// do stuff
})
while = (fn) =>
if fn()
while(fn)
// or loop some number of times
loop(fn: (i) => {
// do stuff here
},
times: 10)
// or loop some number of times
loop(fn: (i) => {
// do stuff here
},
times: 10)
loop = (fn, times) =>
loopUntil(fn, 0, times)
// or loop some number of times
loop(fn: (i) => {
// do stuff here
},
times: 10)
loop = (fn, times) =>
loopUntil(fn, 0, times)
loopUntil = (fn, iteration, times) =>
if iteration < times {
fn(iteration)
loopUntil(fn, iteration + 1, times)
}
Syntactic Sugar
// <stream object>[<predicate>,<time>:<time>,<list of strings>]
// <stream object>[<predicate>,<time>:<time>,<list of strings>]
// and here's an example
from(bucket:"foo")[_measurement == "cpu" and _field == "usage_user",
2018-11-07:2018-11-08,
["_measurement", "_time", "_value", “_field”]]
// <stream object>[<predicate>,<time>:<time>,<list of strings>]
// and here's an example
from(bucket:"foo")[_measurement == "cpu" and _field == "usage_user",
2018-11-07:2018-11-08,
["_measurement", "_time", "_value", “_field”]]
from(bucket:"foo")
|> filter(fn: (row) => row._measurement == "cpu" and row._field == "usage_user")
|> range(start: 2018-11-07, stop: 2018-11-08)
|> keep(columns: ["_measurement", "_time", "_value", "_field"])
from(bucket:"foo")[_measurement == "cpu"]
// notice the trailing commas can be left off
from(bucket: "foo")
|> filter(fn: (row) => row._measurement == "cpu")
|> last()
from(bucket:"foo")["some tag" == "asdf",,]
from(bucket: "foo")
|> filter(fn: (row) => row["some tag"] == "asdf")
|> last()
from(bucket:"foo")[foo=="bar",-1h]
from(bucket: "foo")
|> filter(fn: (row) => row.foo == "bar")
|> range(start: -1h)
bucket = "foo"
start = -3
from(bucket: bucket)
|> range(start: start, end: -1)
// shortcut if the variable name is the same as the argument
from(bucket)
|> range(start, end: -1)
Flux Office Hours Tomorrow
InfluxDB 2.0 Status
Thank you
Paul Dix

paul@influxdata.com

@pauldix
