14. Legacy tech stack creates significant information losses that inhibit
effective operational value creation and high-end data science
[Stack diagram, top to bottom: Analytics, Data Science / Math, Optimization, Software, Data, Historian, Comms / SCADA, Hardware]
Information Loss Sources & Value Creation Opportunities
1. Pump to surface
2. Surface to PLC
3. PLC to PI via SCADA
4. Historian data quality issues
5. PLC to optimization software
6. Software to end user
7. Well-level time/data overload and human variation
8. Well-to-field time/data overload and human variation
9. Human back to PLC
10. POC to Pumpjack
15. SCADA deployments are costly and complex
50+ year-old SCADA architecture prevents implementation of modern data science and analytics
Architecture
● SCADA architecture limits real-time analytics & data science
● Low-res polling approach misses insightful events
● SCADA designed (1960s) when storage & compute were expensive but bandwidth was unlimited (wired)
● Today: storage & compute are cheap, while bandwidth is the major limitation
Systems Integration
● Integration of components is challenging: hardware + comms + historian + software
● Complex implementation in field + office means integration is never seamless
● Single-screen solution difficult to achieve
High Cost
● High initial capex
● Maintenance costs
● High replacement costs
● Total cost of ownership limits business case to new/high-value wells
Security
● Fundamentally unsecured in the field
● Unsecured local ports
● Sniffable clear-text transmission
● Storage not encrypted
● The air-gap myth
16. E&Ps could benefit enormously from advancements in technology
Opportunity cost of SCADA and its limitations will continue to compound
Artificial Intelligence and Machine Learning
Artificial Intelligence (AI) has increased safety and quality
while increasing revenue and reducing costs in numerous
consumer applications
Internet of Things
Internet of Things (IoT) is already advancing from simple edge
data collection to intelligent, distributed automation
across numerous segments
Source: IoT Analytics, Q3/2016
17. Machine Learning and Deep Learning
Machine Learning - Anomaly Detection
Deep Learning - Neural Networks
Machine learning uses algorithms to parse data, learn
from that data, and make informed decisions based
on what it has learned
Deep learning structures algorithms in layers to
create an “artificial neural network” that can learn and
make intelligent decisions on its own
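The machine-learning half of this contrast can be sketched with a toy anomaly detector: a rolling-statistics rule that flags load samples deviating sharply from recent behaviour. The signal, window size, and threshold below are all invented for illustration; this is not Ambyint's model.

```python
# Toy anomaly detector: flag load samples that deviate sharply from the
# rolling statistics of the preceding window. Signal, window, and
# threshold are invented for illustration; this is not Ambyint's model.
import numpy as np

def detect_anomalies(signal, window=50, threshold=4.0):
    """Return indices where a sample sits more than `threshold` standard
    deviations away from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Synthetic pump-load trace: a smooth stroke cycle plus one injected spike.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
load = np.sin(t) + rng.normal(0, 0.05, t.size)
load[1500] += 3.0  # simulated mechanical event

print(detect_anomalies(load))  # flags the injected spike near index 1500
```

The deep-learning case would replace the hand-written rule with layered models learned from labeled dynamometer cards.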
18. Key criteria for high quality data science
01 High-resolution, cycle-level data
02 Domain expertise to inform feature engineering
03 Data lake
04 Labeled ("marked") data
05 Continuous feed of new data to regularly validate models
19. 10+ years of data gathered from 800+ oil wells
41M dynamometer cards generated with expert classification
100M operating hours of data
5 ms sampling rate of high-resolution data
High-resolution data is a foundational asset for AI development
Massive Proprietary Data Lake
20. Economic, actionable data science for industrial assets is only
enabled with deep data from edge devices
[Charts: load vs. time — SCADA fixed polling frequency vs. Ambyint pattern recognition over a motor cycle]
● Regular polling frequency of SCADA
misses insightful events
● Uneconomic to ratchet up polling without
large capex spend
● Ambyint High-Resolution Adaptive
Controllers (HRACs) capture rich, motor
level data
● HRACs leverage edge compute to produce
edge analytics from sensor information
High resolution + pattern recognition + event-based capture enables quality data science
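A small simulation makes the polling argument concrete: a brief transient that a 5 ms edge device captures but a coarse fixed-interval poll never sees. The 5 ms rate echoes the deck; the 15 s poll interval, trigger level, and signal itself are invented for illustration.

```python
# Simulation of the polling argument: a 200 ms transient that a 5 ms
# edge device captures but a 15 s SCADA poll never sees. Poll interval,
# trigger level, and the signal itself are invented for illustration.
import numpy as np

t = np.arange(0.0, 60.0, 0.005)          # 60 s of load data at 5 ms (edge rate)
load = np.sin(2 * np.pi * t / 6.0)       # ~6 s motor cycle
load[(t > 30.05) & (t < 30.25)] += 2.0   # 200 ms transient event

polled = load[::3000]                    # SCADA-style poll every 15 s
events = load[np.abs(load) > 1.5]        # edge trigger keeps every 5 ms sample

print(polled.max())   # coarse polling never sees the transient
print(events.size)    # every 5 ms sample inside the event is retained
```

Raising the poll rate field-wide would multiply bandwidth cost; the event trigger transmits only the samples that matter.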
21. Cloud is a requirement for executing AI and ML
AI as a Service (AIaaS)
Infrastructure as a Service (IaaS)
● Driven by deep consumer/social data
● Pre-trained models
○ Speech recognition
○ Translation
○ Image content identification
○ Computer vision
○ NO INDUSTRIAL MODELS
● Elastic compute & storage
○ Ability to store large quantities of structured and
unstructured data
○ Custom models using choice of AI frameworks
● Analysis & Visualization Tools
○ Ability to explore and understand data sets via
visualizations
Cannot overcome fundamental limitations of the data itself
Ambyint + AWS
22. End-to-end IoT solution enables autonomy while conventional
implementation provides advisory capabilities
[Charts: workload vs. time — static automation vs. AI-enabled system]
Automation stays static and has no
capacity to learn and improve its capability
True AI initiatives are enabled by domain
experts. They train the system and the AI
actively learns with time and data
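The contrast on this slide can be sketched as two toy controllers: one that never changes its set point, and one that nudges it from feedback. The well-response model and update rule below are invented for illustration; this is not Ambyint's algorithm.

```python
# Two toy controllers for the slide's contrast: static automation never
# adapts; a learning controller nudges its set point from feedback.
# The well-response model and update rule are invented for illustration.

def run(update, setpoint=10.0, target_fillage=0.9, steps=20):
    """Simulate a well whose pump fillage drops as stroke rate rises."""
    for _ in range(steps):
        fillage = max(0.0, 1.0 - setpoint / 25.0)  # fake well response
        setpoint = update(setpoint, fillage, target_fillage)
    return setpoint, fillage

def static(setpoint, fillage, target):
    return setpoint                                # never changes

def learning(setpoint, fillage, target):
    return setpoint + 5.0 * (fillage - target)     # simple feedback step

print(run(static))    # set point stuck at 10.0, fillage stays off target
print(run(learning))  # set point drifts until fillage approaches 0.9
```

The real system replaces the hand-written feedback step with models trained by domain experts, but the structural difference is the same: one loop closes on new data, the other never does.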
23. <7 month payback deploying autonomy to 20 WTX stripper wells
Autonomous Set Point Management (ASPM) machine learning is economic on 3 BOPD conventional wells
Value through ASPM overpumping mitigation
Value through remote visibility & control
● Pilot wells dialed in went from 15% to 65%
● 3× reduction in cycling with no loss in
production
○ 65% reduction in strokes
○ 54% increase in pump efficiency
● Enabled pump by exception
○ Operator visibility and BU accountability to
wells down outside of timer settings
○ Alerted to mechanical issues such as wells
without belts or surface failures
Field-Wide Value: $320K power savings, $700K workover savings, $1MM total savings
(Excludes: uptime, uplift, wellsite visit, chemical/maintenance improvements)
Notes: Power Cost: 4.5¢/KWh, Workover Cost: $25,000/well, 10 Million Strokes to Failure, 1 well = 1.4mm strokes saved, field expansion of 250 wells
[Chart, baseline vs. post-ASPM: run life increased by 139%; power consumption reduced by 58%]
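The footnote assumptions let the workover-savings claim be sanity-checked with simple arithmetic. The deck's exact derivation isn't shown, so this back-of-envelope lands in the same ballpark as, not exactly on, the quoted $700K figure.

```python
# Back-of-envelope reconstruction of the workover savings using only the
# slide's footnote assumptions ($25k workovers, 10MM strokes to failure,
# 1.4MM strokes saved per well, 250-well field expansion).
WORKOVER_COST = 25_000            # $ per workover (footnote)
STROKES_TO_FAILURE = 10e6         # strokes (footnote)
STROKES_SAVED_PER_WELL = 1.4e6    # strokes per well (footnote)
FIELD_WELLS = 250                 # field expansion (footnote)

failures_avoided = FIELD_WELLS * STROKES_SAVED_PER_WELL / STROKES_TO_FAILURE
workover_savings = failures_avoided * WORKOVER_COST

print(f"~{failures_avoided:.0f} workovers avoided, ${workover_savings:,.0f} saved")
```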
24. ASPM delivers $78MM+ revenue increase when extrapolated field wide
Ambyint machine learning shows value in automated well optimization for North Dakota Bakken rod pump wells
Value through overpumping mitigation
Value through underpumping mitigation
Notes: workover prevention estimate assumes 1yr runlife prior to Ambyint, 3MM strokes to failure, linear reduction in failure rate with stroke reduction (11%); revenue increase estimate
assumes 450 rod pumps with 60% of field overpumping, 20% dialed in, 20% underpumping getting 5% production increase; workover cost savings estimate assumes 450 rod pumps with
$50k average failure cost and failure frequency of 1.0 applying 10% stroke reduction.
● 11% reduction in strokes / electricity
● 59+ workovers prevented annually*
○ 290k annual strokes saved/well
○ 175MM annual strokes saved w/ full field
deployment
● 33% increase in BOPD on underpumping wells
○ Increase of 135 BOPD
○ Equivalent to 6% increase across all pilot wells
Field wide value: $78MM increase in annual revenue, $3MM reduction in workover costs, $350k
annual electricity savings from reduced strokes
[Chart: share of wells overpumping / dialed in / underpumping, before vs. after Ambyint]
25. Ambyint & AWS Partnership
● Use of Amazon Elastic Container Service (ECS) to help
provide a highly scalable and performant environment to
handle large amounts of well data traffic.
● Amazon Elastic MapReduce (EMR) to help define
classifications and build models which serve as the
foundation for ASPM.
● Integration with AWS Lambda helps provision
functionality that can be continuously and easily
improved upon without the need for complex server
deployments.
● Decisions from our services are delivered to users
effectively and securely using the Amazon CloudFront CDN
and a static S3 deployment.
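As an illustration of the Lambda piece, a minimal handler might look like the sketch below. The event fields, the fillage threshold, and the action names are invented; Ambyint's actual payloads and logic are not public.

```python
# Minimal sketch of a Lambda-style handler for the decision-delivery
# flow described above. Event fields, the fillage threshold, and action
# names are invented; Ambyint's actual payloads and logic are not public.
import json

def handler(event, context=None):
    """AWS Lambda entry point: turn a well-data event into an action."""
    fillage = event["pump_fillage"]
    action = "reduce_strokes" if fillage < 0.7 else "hold_setpoint"
    return {
        "statusCode": 200,
        "body": json.dumps({"well_id": event["well_id"], "action": action}),
    }

# Local invocation with a sample event -- no AWS deployment needed:
print(handler({"well_id": "WTX-042", "pump_fillage": 0.55}))
```

Because the handler is stateless, Lambda can scale it per event, which is the "no complex server deployments" point in the bullet above.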