Monitoring Your ISP Using InfluxDB Cloud and Raspberry Pi

  1. April 7, 2021
  2. • Travel addict • Photography enthusiast • Interested in tech • Robotics/Industrial engineer
  3. Why are we here?
  4. (Image sources: cloud.google.com, edition.cnn.com)
  5. What is going on at my place?
  6. Monitor internet speed day & night  Collect data  View data
  7. Monitor internet speed day & night  Collect data  View data … and what can I do if I’m not at home?
  8. NEED → HOW-TO:  Measure → Raspberry Pi  Store → Spreadsheet  Visualize → Spreadsheet  Communication → Not possible
  9. PROS  Easy to implement  Low effort to present data CONS  Local only  Requires a PC to open the file  Concurrent writes to the file are a problem  SD card corruption. Need to find a better solution!
  10. NEED → HOW-TO:  Measure → Raspberry Pi  Store → InfluxDB  Visualize → Grafana  Communication → Sense-hat + Telegram
  11. PROS  Nothing to store locally  Maintenance not on me!  A chance to learn Flux! CONS  Not much!! Time to switch to InfluxDB 2!
  12.  Several options available  Needed ▪ Public API ▪ Scriptable ▪ Compatible with Debian-like systems  Not needed ▪ 100% uptime ▪ Certification  Choice: speedtest CLI (speedtest.net)
  13.  Time series DB  Needed ▪ Free ▪ Cloud-based ▪ Public API  Not needed ▪ 100% uptime ▪ Large storage  Choice: InfluxData Cloud free tier
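      In the deck the data is written through Telegraf (slide 19), but the "Public API" requirement is easy to illustrate with the official influxdb-client Python package. A minimal sketch; the URL, token, org, bucket, and example field values below are placeholders, not taken from the deck:

          from influxdb_client import InfluxDBClient, Point
          from influxdb_client.client.write_api import SYNCHRONOUS

          # Placeholder credentials from an InfluxDB Cloud account.
          client = InfluxDBClient(url="https://<region>.cloud2.influxdata.com",
                                  token="<token>", org="<org>")
          write_api = client.write_api(write_options=SYNCHRONOUS)

          # One point, mirroring the field names used later in the deck.
          point = (Point("speedtest")
                   .tag("isp", "<ISP>")
                   .field("download_bandwidth", 7662563.0)
                   .field("upload_bandwidth", 2197125.0))
          write_api.write(bucket="<bucket>", record=point)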
  14.  Dashboard  Needed ▪ Free ▪ Cloud-based ▪ Nice-looking ▪ App-based (desiderata)  Not needed ▪ 100% uptime ▪ Lots of dashboards  Choice: Grafana Cloud (free)
  15.  Somehow give fast feedback about my network  Needed ▪ Free ▪ Request-based ▪ Easy to implement  Not needed ▪ 100% uptime  Choice: custom Telegram bot
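      The bot itself is only shown in outline later (slides 24-26). As a minimal sketch of the "request-based" idea, here is a handler in python-telegram-bot's v13 style; the library choice and the /speed command name are assumptions, not from the deck:

          from telegram.ext import Updater, CommandHandler

          def speed(update, context):
              # The real bot would run the InfluxDB query from slide 24 here;
              # this reply is a hard-coded placeholder.
              update.message.reply_text("Download: 77.4 Mbps / Upload: 20.0 Mbps")

          updater = Updater(token="<bot token>")
          updater.dispatcher.add_handler(CommandHandler("speed", speed))
          updater.start_polling()  # answers requests on demand
          updater.idle()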
  16. Let’s look at the implementation
  17. Raspberry Pi #1  Raspberry Pi #2 (info: astro-pi.org)
  18. INTERACTIVE
      $ speedtest
      Speedtest by Ookla
        Server: <Server Name> (id = <Server ID>)
        ISP: <ISP>
        Latency: 6.87 ms (1.95 ms jitter)
        Download: 77.37 Mbps (data used: 62.7 MB)
        Upload: 19.98 Mbps (data used: 9.0 MB)
        Packet Loss: 0.0%
        Result URL: https://www.speedtest.net/result/c/<UUID>
      SCRIPTED
      $ speedtest --format json
      {
        "type": "result",
        "timestamp": "2021-04-07T18:00:00Z",
        "ping": { "jitter": 0.696, "latency": 6.645 },
        "download": { "bandwidth": 7662563, "bytes": 47087360, "elapsed": 5914 },
        "upload": { "bandwidth": 2197125, "bytes": 7004736, "elapsed": 3607 },
        "packetLoss": 0,
        "isp": "<ISP>",
        "interface": { "internalIp": "<LAN ip>", "name": "<LAN interface name>",
                       "macAddr": "<LAN interface MAC>", "isVpn": false,
                       "externalIp": "<Public IP>" },
        "server": { "id": <Server ID>, "name": "<Server Name>",
                    "location": "<Server Location>", "country": "Italy",
                    "host": "<Server Host>", "port": 8080, "ip": "<Server IP>" },
        "result": { "id": "<UUID>",
                    "url": "https://www.speedtest.net/result/c/<UUID>" }
      }
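      In this setup the JSON goes straight into Telegraf (next slide); purely to illustrate the "scriptable" output, a few lines of Python can run the CLI and pull out the headline numbers. Note that "bandwidth" is reported in bytes per second, which is why the Flux queries later multiply by 8:

          import json
          import subprocess

          # Run the CLI non-interactively; --server-id <id> could pin a specific server.
          raw = subprocess.run(
              ["speedtest", "--accept-license", "--accept-gdpr", "--format", "json"],
              capture_output=True, text=True, check=True).stdout
          data = json.loads(raw)

          # "bandwidth" is bytes/s; * 8 / 1e6 converts to Mbps.
          down = data["download"]["bandwidth"] * 8 / 1e6
          up = data["upload"]["bandwidth"] * 8 / 1e6
          print(f"{down:.1f} Mbps down / {up:.1f} Mbps up, "
                f"latency {data['ping']['latency']} ms")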
  19. [[outputs.influxdb_v2]]
        urls = [ "${SPEEDTEST_SERVERURL}" ]
        token = "${SPEEDTEST_TOKEN}"
        organization = "${SPEEDTEST_ORGANIZATION}"
        bucket = "${SPEEDTEST_BUCKET}"

      # Cast the flattened JSON keys to the proper field types.
      [[processors.converter]]
        [processors.converter.fields]
          string = [ "server_id" ]
          integer = [ "server_port" ]
          float = [
            "download_bandwidth", "download_bytes", "download_elapsed",
            "upload_bandwidth", "upload_bytes", "upload_elapsed",
            "packetLoss", "ping_latency", "ping_jitter",
          ]

      # Run the speedtest CLI every 15 minutes and parse its JSON output.
      [[inputs.exec]]
        interval = "15m"
        commands = [ "/usr/bin/speedtest --accept-license --accept-gdpr -f json" ]
        name_override = "${SPEEDTEST_MEASUREMENT}"
        timeout = "60s"
        data_format = "json"
        json_time_format = "2006-01-02T15:04:05Z"
        json_time_key = "timestamp"
        tag_keys = [ "interface_externalIp", "interface_internalIp", "isp", "server_host" ]
        json_string_fields = [
          "server_location", "server_name", "server_country",
          "server_ip", "result_id", "result_url",
        ]
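      The ${SPEEDTEST_*} placeholders are resolved from Telegraf's environment, which keeps the token out of the config file. A standard way to sanity-check a config like this (not shown in the deck) is a one-shot dry run, telegraf --config <config file> --test, which gathers the metrics once and prints them to stdout instead of writing them to InfluxDB.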
  20. from(bucket: mybucket)
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == mymeasurement and (
             r["_field"] == "download_bandwidth" or
             r["_field"] == "upload_bandwidth" or
             r["_field"] == "ping_latency"))
        |> keep(columns: ["_time", "_field", "_value"])
        |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
        // bandwidth fields arrive in bytes/s; convert them to bits/s
        |> map(fn: (r) => ({ r with _value:
             if r._field == "download_bandwidth" or r._field == "upload_bandwidth"
             then r._value * 8.0
             else r._value }))
  21. from(bucket: mybucket)
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == mymeasurement and (
             r["_field"] == "download_bandwidth" or
             r["_field"] == "upload_bandwidth" or
             r["_field"] == "ping_latency" or
             r["_field"] == "server_name" or
             r["_field"] == "packetLoss" or
             r["_field"] == "server_location"))
        |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
        |> group(columns: ["server_host"])
        |> keep(columns: ["download_bandwidth", "upload_bandwidth", "ping_latency",
             "packetLoss", "server_location", "server_name", "server_host"])
        // per-server running averages
        |> reduce(identity: {
               server_name: "",
               server_location: "",
               valcount: 0.0,
               download_bandwidth: 0.0,
               upload_bandwidth: 0.0,
               ping_latency: 0.0,
               packetLoss: 0.0,
           }, fn: (r, accumulator) => ({
               server_name: r.server_name,
               server_location: r.server_location,
               valcount: accumulator.valcount + 1.0,
               download_bandwidth: (r.download_bandwidth + accumulator.download_bandwidth * accumulator.valcount) / (accumulator.valcount + 1.0),
               upload_bandwidth: (r.upload_bandwidth + accumulator.upload_bandwidth * accumulator.valcount) / (accumulator.valcount + 1.0),
               ping_latency: (r.ping_latency + accumulator.ping_latency * accumulator.valcount) / (accumulator.valcount + 1.0),
               packetLoss: (r.packetLoss + accumulator.packetLoss * accumulator.valcount) / (accumulator.valcount + 1.0),
           }))
        |> map(fn: (r) => ({ r with
             download_bandwidth: r.download_bandwidth * 8.0,
             upload_bandwidth: r.upload_bandwidth * 8.0 }))
        |> drop(columns: ["server_host"])
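      The reduce() above keeps a running average rather than summing everything and dividing at the end; per row it applies the standard incremental-mean update, with valcount playing the role of n:

          mean_new = (x + valcount * mean_old) / (valcount + 1.0)

      The same recurrence reappears in the daily task on slide 23.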
  22. from(bucket: mybucket)
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == mymeasurement and (
             r["_field"] == "result_id" or
             r["_field"] == "server_name" or
             r["_field"] == "server_location"))
        |> keep(columns: ["_time", "_field", "_value"])
        |> sort(columns: ["_time"], desc: true)
  23. import "experimental"
      import "date"

      option task = {name: "DailyMinMax", cron: "0 2 * * *"}

      today = () => date.truncate(t: now(), unit: 1d)
      // "start" -> midnight yesterday; "end" -> the last instant (1ns) before midnight today
      yesterday = (boundary="start") => {
          timeValue = if boundary == "end"
              then experimental.subDuration(d: 1ns, from: today())
              else experimental.subDuration(d: 24h, from: today())
          return timeValue
      }

      from(bucket: mybucket)
        |> range(start: yesterday(), stop: yesterday(boundary: "end"))
        |> timeShift(duration: 1h, columns: ["_start", "_stop", "_time"])  // presumably a UTC-to-local (CET) adjustment
        |> filter(fn: (r) => r["_measurement"] == mymeasurement and (
             r["_field"] == "download_bandwidth" or
             r["_field"] == "upload_bandwidth" or
             r["_field"] == "ping_latency"))
        |> keep(columns: ["_time", "_field", "_value"])
        |> reduce(identity: {valcount: 0.0, valmin: 0.0, valmax: 0.0, valmean: 0.0},
           fn: (r, accumulator) => ({
               valcount: accumulator.valcount + 1.0,
               valmin: if accumulator.valcount == 0.0 then r._value
                   else if r._value < accumulator.valmin then r._value
                   else accumulator.valmin,
               valmax: if accumulator.valcount == 0.0 then r._value
                   else if r._value > accumulator.valmax then r._value
                   else accumulator.valmax,
               valmean: (r._value + accumulator.valmean * accumulator.valcount) / (accumulator.valcount + 1.0),
           }))
        |> map(fn: (r) => ({r with _time: yesterday(boundary: "end"), _measurement: "daily", data: r._field}))
        |> to(
             bucket: <mybucket>,
             org: <myorganization>,
             tagColumns: ["data"],
             fieldFn: (r) => ({
                 "valcount": r.valcount,
                 "valmean": r.valmean,
                 "valmin": r.valmin,
                 "valmax": r.valmax,
             }))
  24.  Visual feedback  Periodical & on-demand  Python + AstroPi (kind of; info: astro-pi.org)

      [...]
      self.query_string = '''
          import "math"
          from(bucket: "''' + self.bucket + '''")
            |> range(start: -1d)
            |> filter(fn: (r) => r["host"] == "%s" and r["_measurement"] == "''' + self.measurement + '''" and (
                 r["_field"] == "download_bandwidth" or
                 r["_field"] == "upload_bandwidth" or
                 r["_field"] == "ping_latency"))
            |> keep(columns: ["_time", "_field", "_value"])
            |> sort(columns: ["_time"], desc: false)
            |> last()
            |> map(fn: (r) => ({ r with _value:
                 if r._field == "download_bandwidth" or r._field == "upload_bandwidth"
                 then math.round(x: r._value * 8.0 / 10000.0) / 100.0  // bytes/s -> Mbps, 2 decimals
                 else r._value }))
            |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")'''
      db_client = InfluxDBClient(url=self.url, token=self.token, org=self.org)
      db_data = db_client.query_api().query_stream(query=(self.query_string % hostname), org=self.org)
      [...]
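      The Sense HAT side is not spelled out in the deck. A minimal sketch, assuming the standard sense_hat library, could scroll the values from the pivoted record above across the 8x8 LED matrix; the message text is illustrative:

          from sense_hat import SenseHat

          sense = SenseHat()
          # Format the pivoted query record; the literal values are placeholders.
          sense.show_message("D 77.4 U 20.0 Mbps",
                             scroll_speed=0.05, text_colour=[0, 255, 0])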
  26. PROS  Quick and easy  On demand  Custom Telegram bot on the Raspberry Pi CONS  No Slack app & workspace  No live notifications (yet)  No service in case of network issues
  28.  Do not use a random test server  Select the optimal test server (it can be pinned, e.g. with the CLI's --server-id option)  Use good hardware at home
  29.  Integration with Smart Speakers (?)  Automatic daily reporting  Event notifications  …
  30. github.com/mirkodcomparetti/
  31. We look forward to bringing together our community of developers to learn, interact, and share tips and use cases.  Hands-On Flux Training: 10-11 May 2021  InfluxDays EMEA Virtual Experience: 18-19 May 2021  www.influxdays.com/emea-2021-virtual-experience/