We’ll take you through the most cutting-edge scenarios our team has been working on over the last year, including applying machine learning to geospatial data, real-world use cases for immersive environments, photogrammetry, and more.
2. Today’s Spiritual
Experiences
1. See the world through new eyes.
2. Now try to understand it.
3. Worry less over your bills.
4. You can do anything! Including using R.
5. Augmented Reality in a new light.
6. Immerse in virtual environments.
7. Soar high above your problems.
Relax and let FME
guide you
3. Computer Vision
A field of Machine Learning with the goal of helping computers
“see” and “understand” what’s in an image.
4. Last year, you asked if it was
possible to use Computer Vision
with geospatial data.
We went home and tried it.
5. How does CV work in FME?
Step One: Train the Algorithm
Input:
A lot of data samples
Use FME to connect to a
CV tool and train it to
recognize your object
Output: XML file
containing the knowledge
“Train” the algorithm once, then use the resulting XML file to perform computer vision.
6. How does CV work in FME?
Step Two: Perform CV on your Image(s)
Input: The image(s) you
want to perform CV on
Use FME to pre-process,
connect to a CV tool, and
post-process
Output:
Rectangles around found
objects
7. CV on Cars
Car detection on aerial imagery using FME.
Workflow:
● Data collection and preprocessing.
● Connect to OpenCV using the
RasterObjectDetector.
● Clipping, rotating, filtering, and
other post-processing.
8. CV on Roofs
Roof detection on aerial imagery using
FME in the same way.
E.g. a county needs to detect what’s
new whenever they get new imagery.
9. CV on Oblique Images
Car detection on oblique images using
FME in the same way.
10. CV Tools in FME
Many ways to connect to Computer Vision algorithms!
● OpenCV (via RasterObjectDetector transformers)
● Google Vision* NEW!
● Azure Computer Vision* NEW!
● Amazon Rekognition* NEW!
● PicterraConnector* NEW and Geospatial!
* Paid services. OpenCV is FOSS.
11. Example: Face detection and augmented
objects with Google Vision
Do you want to try the mask on?
Do you want to make your own filter?
12. RekognitionConnector workflow example:
1. Use spatial filtering to submit only
necessary photos or video frames
(this is a paid service!)
2. Detect objects
3. Filter road signs
4. Clip road signs from photos
5. Detect text on road sign clips
6. Stay in control over bills with FME
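Step 1 above, submitting only the necessary photos, can be sketched with a plain bounding-box test. The corridor coordinates and photo records below are made-up illustration data:

```python
# Sketch of step 1: spatially filter photos *before* sending them to
# Amazon Rekognition, so you only pay for frames that can actually
# contain road signs. The AOI box and photo records are invented.
from dataclasses import dataclass

@dataclass
class Photo:
    name: str
    lon: float
    lat: float

# Area of interest: only photos taken along this stretch of road.
AOI = (-123.2, 49.1, -122.9, 49.3)  # (min_lon, min_lat, max_lon, max_lat)

def in_aoi(p: Photo, box=AOI) -> bool:
    min_lon, min_lat, max_lon, max_lat = box
    return min_lon <= p.lon <= max_lon and min_lat <= p.lat <= max_lat

photos = [
    Photo("frame_001.jpg", -123.1, 49.2),  # inside the corridor
    Photo("frame_002.jpg", -120.0, 50.0),  # far away -- skip it
]

to_submit = [p for p in photos if in_aoi(p)]
print([p.name for p in to_submit])
```

In a real workspace this would be a SpatialFilter against the actual road geometry, but the effect is the same: fewer frames sent to the paid service, smaller bill.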
23. Further Reading
● Blog: FME Does Computer Vision (only talks about the OpenCV
method – the RasterObjectDetector transformer family)
● Computer Vision Webinar - “How to Improve Computer Vision
with Geospatial Tools”
24. Natural language processing is a subfield of linguistics, computer science,
information engineering, and artificial intelligence concerned with how to
program computers to process and analyze large amounts
of natural language data
Natural
Language
Processing
25. NLP Tools in FME
The same providers who offer CV also have Natural
Language Processing, i.e. helping computers analyze
text data written in natural language.
● NLTK (via NLPTrainer and NLPClassifier)
● Google Natural Language* NEW!
● Azure Text Analytics* NEW!
● Amazon Comprehend* NEW!
* Paid services. NLTK is FOSS.
26. Example: Key phrases and sentiment
analysis of the stories by O. Henry
All three Comprehend modes
Mixed emotions
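To show the shape of what key-phrase extraction returns, without calling the paid service, here is a toy frequency-based version over an O. Henry opening line. Comprehend's actual models are far more sophisticated; this only illustrates the kind of output FME would post-process:

```python
# Toy key-phrase extraction: word frequency with stopwords removed.
# This is NOT Comprehend's algorithm -- just an illustration of the
# (phrase, score) shape the services return.
from collections import Counter
import re

text = ("One dollar and eighty-seven cents. That was all. "
        "And sixty cents of it was in pennies.")  # O. Henry, "The Gift of the Magi"

STOPWORDS = {"one", "and", "that", "was", "all", "of", "it", "in"}

words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
top = Counter(words).most_common(2)
print(top)
```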
29. How to Process Your PDF Bills
● In your email inbox, set up a rule to forward your bill
emails to FME Server.
● In FME Server, build an Automation to:
○ Trigger when an email is received.
○ Extract the PDF attachment.
○ Run a workspace that uses
the PDF Reader to get
the amount from the PDF.
○ Save the amount to a database.
○ Send back a notification report.
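The "get the amount from the PDF" step of the automation above boils down to pattern-matching the extracted text. A minimal sketch, assuming the PDF Reader has already handed the workspace plain text; the bill layout and field name here are invented:

```python
# Sketch of extracting the amount to pay from bill text. The text is
# assumed to be already pulled out of the PDF by FME's PDF reader;
# the "Amount Due" layout is a made-up example bill.
import re

bill_text = """
Natural Gas Bill
Account: 12345
Amount Due: $87.42
Due Date: 2020-03-15
"""

match = re.search(r"Amount Due:\s*\$([\d,]+\.\d{2})", bill_text)
amount = float(match.group(1).replace(",", "")) if match else None
print(amount)
```

In practice each bill type gets its own pattern (hence the workspace-per-bill-type note later), and the parsed amount is what lands in the database.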
31. Further Reading
● PDF processing:
○ [Blog] Extracting Geospatial Data from PDFs
○ [Webinar + Demos] Reading PDFs with FME
■ Includes demos for reading spatial data & maps
○ [Tutorial] Getting Started with PDF Reading
● [Blog] Synchronizing Accounting Data using FME
33. BC Cancer and Safe Initiative
● BC Cancer has too much data to manually analyze.
● We want to reduce their manual processes by producing
FME Workspaces to help them analyze this data.
● By using the RCaller transformer, we can better integrate
statistics into FME.
38. How Does AR Work in FME?
Your dataset (3D model,
raster image, polygons,
lines, etc.)
Use FME to transform and
convert it to .fmear
View your dataset in
augmented reality using
the FME AR app
39. FME Data Spa
What’s this theme all about?
Use Cases for FME AR
● See all infrastructure (including
underground, behind walls).
● See locations and paths of wildfires or other
natural disasters.
● Tourism – landmark locations and labels.
● Visualize CAD designs with parts and labels.
● Education – add labels and explanations.
● “Where is My Team?” app.
● See proposed buildings or past ones.
● See 3D landscapes for mining, civil
engineering, or earth sciences.
42. Resources
● Get the FME AR app for iOS and Android
● [Tutorial] Getting Started with Augmented Reality. Learn how to
use the FME AR app and create .fmear datasets.
● [Blog] Making your own AR models
● [Blog] Augmented Reality for Indoor Wayfinding
43. What do you think about AR?
Is it still a toy or do you think it can be used for
serious things?
Do you want to try it?
Tell us more in Q&A or email us later!
44. Gaming Engines
Explore your data in a real-time virtual environment! Practical use cases for
bringing data into Unreal Engine.
45. Scenario: Office floor plan. Bringing a PNG into Unreal Engine
(how about a blooper?)
2D Floor Plan to Unreal Engine
49. Cesium and AR outputs
50. What Other Scenarios
Can You Envision?
● See your community in the past, present, and future.
● Explore proposed and alternative buildings or floor plans.
● See a CAD model of machinery in its true dimensions.
● Thoughts? Let us know!
[Abstract] Get inspired by a few of the most cutting-edge scenarios our team has been working on over the last year, including applying machine learning to geospatial data, real-world use cases for immersive environments, photogrammetry, and more. Relax and let FME guide you on this enlightening journey of new data transformation possibilities!
Computer Vision (or “CV”) is a field of Machine Learning with the goal of helping computers “see” and understand what’s in an image. Let’s look at what we’ve done with CV in the last year.
You train it once and then you get back an XML file containing the knowledge. Then you can use the XML file to do CV.
Real use case: counting cars in a parking lot.
Real/potential use cases:
People are currently testing whether this can be used to find landmines. Huge safety win – can send drones instead of people.
County use case - to update their basemaps by identifying changes.
Other places do roof counting for taxation purposes.
Other use cases - counting/finding objects like garbage bins.
Detecting vegetation is also an area of interest. Multispectral imagery helps here, since plants give off infrared.
Easily train different imagery sources – aerial, drones, ordinary photos. Doesn’t need to be a perfectly aerial shot.
Before we get into how it works, here are the CV tools available in FME. Some of these are available via FME Hub and some are regular FME transformers. The last 4 are new in FME 2020.
https://www.gislounge.com/when-ai-goes-wrong-in-spatial-reasoning/
Paid services - the vendor charges, not us. Using these FME transformers is, of course, free. OpenCV is the free option because it’s a FOSS library.
Every team can talk to Dmitri to get their faces Harry-Potterized
Google Vision can detect facial features and then we place objects on the image
Dmitri’s video blog for spatial filtering example
Related to CV: you can use FME to count moving objects - like cars, birds.
This picture is slit-scan photography.
FME was used to create this raster containing tens of thousands of pixels.
The X axis represents time. ~5 frames per second.
For more info, see the CV webinar: https://www.safe.com/webinars/how-to-improve-computer-vision-with-geospatial-tools/
Segue: now that you’ve used Google/Microsoft/Amazon’s paid services to perform CV, you’ve got a bunch of bills to deal with...
Last year we demoed extracting geospatial data & map info from PDFs. This demo focuses on text-based PDFs.
Workspaces will be available for people to download and try themselves.
Input bills: PDF, emails, hosted (use httpcaller)
Workspace for each bill type
Erin and Dmitri spoke about creating a video for this.
There is a demo showing how it all works. It requires some setup ahead of time, but then it just works and looks quite impressive.
The WT2020 cloud machine catches emails sent to billcatcher@fmewt2020-safe-software.fmecloud.com with PDF attachments and the subjects “Natural Gas Bill”, “Hydro Bill”, or “Phone Bill”. These emails should be sent from some email account to the presenter’s email (the presenter must set up forwarding of such emails to the address above ahead of time). The automation then processes the PDFs, extracts the senders and amounts to pay, adds those records to a Google Sheet, and calculates a new total. Finally, it reports success or failure back to the presenter via email.
Could mention that our new Safe office is in the health tech district. We’re across from BC Cancer and started an initiative with them and SFU NeuroTech.
We originally started working with BC Cancer to see if we could analyze 3D data using FME. You might remember this from last year’s World Tour. However, after a demo of FME, Dr. Krauze (the clinician we were working with) saw the potential to use FME with her tabular data.
BC Cancer has large datasets that no one is looking at because of the large amount of work needed to analyze the data.
We are working with BC Cancer to create a “data pipeline”: we feed data into FME and it produces charts.
Large amounts of data can be read in, the attributes managed in one workflow, and charts produced. From left to right: a histogram showing how many people of each age are in the study (note: this is fake data); in the middle, a box plot with the various risk groups (real data, though I can’t remember if the groups are real); and on the right, a Kaplan-Meier chart, which analyzes survival probability over time.
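The Kaplan-Meier estimate behind that survival chart is simple enough to sketch directly. The survival times below are fake, like the slide's data; in the real pipeline this kind of statistic runs in R via the RCaller transformer:

```python
# Minimal Kaplan-Meier survival estimator: S(t) = prod(1 - d_i / n_i)
# over the event times, where d_i is deaths at t_i and n_i is the
# number still at risk. Times and censoring flags are fake data.
def kaplan_meier(times, events):
    """times: time of event/censoring; events: 1 = event, 0 = censored.
    Returns [(t, S(t))] at each event time."""
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)  # skip all rows at t
    return curve

curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print(curve)
```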
Take a DEM or other numeric raster (or any raster for that matter) and turn it into a 3D image or video.
rayshader (https://www.rayshader.com/) is an open source package for producing 2D and 3D data visualizations in R. rayshader uses elevation data in a base R matrix and a combination of raytracing, spherical texture mapping, overlays, and ambient occlusion to generate beautiful topographic 2D and 3D maps. In addition to maps, rayshader also allows the user to translate ggplot2 objects into beautiful 3D data visualizations.
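The core trick rayshader builds on is hillshading: shade each cell of the elevation matrix by how directly it faces the light. A pure-Python sketch using the standard zenith/azimuth formulation on a tiny fake DEM (rayshader layers raytracing, texture mapping, and ambient occlusion on top of this):

```python
# Minimal hillshade of one DEM cell: slope and aspect from finite
# differences, then Lambertian-style shading against a light at the
# given azimuth/altitude. The 3x3 elevation matrix is fake data.
import math

dem = [[0, 1, 2],
       [0, 1, 2],
       [0, 1, 2]]  # a uniform slope rising to the east

def hillshade(z, i, j, cell=1.0, azimuth=315.0, altitude=45.0):
    zenith = math.radians(90.0 - altitude)
    az = math.radians((360.0 - azimuth + 90.0) % 360.0)  # to math angle
    dzdx = (z[i][j + 1] - z[i][j - 1]) / (2 * cell)
    dzdy = (z[i + 1][j] - z[i - 1][j]) / (2 * cell)
    slope = math.atan(math.hypot(dzdx, dzdy))
    aspect = math.atan2(dzdy, -dzdx) % (2 * math.pi)
    shade = (math.cos(zenith) * math.cos(slope) +
             math.sin(zenith) * math.sin(slope) * math.cos(az - aspect))
    return max(0.0, shade)

print(round(hillshade(dem, 1, 1), 3))
```

A west-facing slope lit from the northwest at 45 degrees comes out partly lit, between full shadow (0) and full illumination (1).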
Video - https://youtu.be/35b4aMy9f-w
Or https://www.dropbox.com/s/n2cy5dl5xk4t14n/rayshaderVideo.mp4?dl=0
Conclude by mentioning that Safe aims to look more closely at R integration possibilities in the near future.
If you need the video as a file instead of the YouTube one, here is the link: https://www.dropbox.com/s/7nj8gxubklwbmz5/Landmarks0.mp4?dl=0
For the live demo (as of January 2020, only for iPhones): add a few landmarks for your WT stop - something evenly spread around the venue. The workspace searches within a 100 km radius.
https://docs.google.com/spreadsheets/d/1wUu0m659an9tufjoL33e3pmhCfy5dzce9dxYhIGjUWI/edit#gid=414062125
Server app: https://fmewt2020-safe-software.fmecloud.com/fmeserver/apps/Landmarks
Direct link: https://fmewt2020-safe-software.fmecloud.com/fmeserver/#/workspaces/run/AR/Landmarks.fmw/fmedatastreaming
Through FME Data Express, use your current location or adjust to a place you know has landmarks around (the Safe office), set the label width (1 is good for short names; longer names require 2, maybe more), and choose units (km or mi). The .fmear will be streamed back; open it in the FME AR app.
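The 100 km radius search reduces to a great-circle distance test per landmark. A sketch using the haversine formula; the coordinates below are made-up stand-ins for whatever landmarks you load:

```python
# Haversine great-circle distance, used to keep only landmarks within
# 100 km of the viewer. Viewer position and landmarks are fake data.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

viewer = (49.17, -122.84)  # roughly Surrey, BC
landmarks = {
    "Science World": (49.273, -123.104),          # ~22 km away
    "Seattle Space Needle": (47.620, -122.349),   # well over 100 km
}

nearby = {name: round(haversine_km(*viewer, *coords), 1)
          for name, coords in landmarks.items()
          if haversine_km(*viewer, *coords) <= 100.0}
print(nearby)
```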
This is a real scenario.
Important note to presenters: DO NOT talk about the social/historical implications of this project. Just talk about the fact that FME is being used to bring data representing a historic site into a game environment.
Data types: CAD, point cloud, drone, 3D, etc.
Cesium & AR images
https://safe-scenarios.s3-us-west-2.amazonaws.com/Cesium/amache/amache.html
Let us know what you’d like to do with Unreal. We can guide development in this format based on what our customers are interested in.
This is about the need to combine photos to make continuous orthoimagery and 3D models.
(Transition from the historic site project where objects like water tower were created in Pix4D and not FME… here we’re demonstrating Pix4D integration with FME)
This game is a fun visualization of an FME Server deployment. It constantly polls FME Server using the REST API to get the current number of engines and jobs that are running.
[0:00] Each cube represents an FME Engine.
The balls are jobs that are queued, running, or completed.
When the job completes, it explodes and launches across the room into a pit that collects all of the completed jobs. Why do completed jobs explode? Because it’s awesome!
This particular FME Server is an internal one we have that runs a lot of workspaces on a schedule.
[0:44] The red ball in the completed pit is a failed job. Successful ones are green. Queued are yellow. Running is blue.
Notice you can pick up and interact with the jobs/balls.
[1:11] We also tried rendering all the workspaces in the repository as flat brown polygons. Every polygon has physics properties attached to it, so there’s a lot going on here in terms of rendering.
[1:42] Unreal has good VR support so you can actually put on a headset and go right inside FME Server. (Word of warning: the explosions are even louder and scarier in VR.)
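The status-to-colour logic just described can be sketched as a tiny mapping. The response records below are stubs for what a real client would GET from the FME Server jobs endpoints; Unreal then spawns one ball per job:

```python
# Sketch of the job-status-to-ball-colour mapping from the demo.
# The response list is a stub standing in for a real FME Server
# REST API polling response.
STATUS_COLOUR = {
    "QUEUED": "yellow",
    "RUNNING": "blue",
    "SUCCESS": "green",
    "FAILED": "red",
}

def balls(jobs):
    """One (job id, colour) ball per job reported by the server."""
    return [(j["id"], STATUS_COLOUR.get(j["status"], "grey")) for j in jobs]

# Stub of one polling response.
response = [
    {"id": 101, "status": "RUNNING"},
    {"id": 102, "status": "QUEUED"},
    {"id": 103, "status": "FAILED"},  # the red ball in the pit
]
print(balls(response))
```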
As a backup consider saving the source video on your machine:
https://www.dropbox.com/s/kp22g0bug3ycgsp/FMEServer-UnrealEngine.mp4?dl=0, or
https://drive.google.com/file/d/1TtrcfDN7AcKrNfO8h7YcW-CSmH8CpEaR/view?usp=sharing