SigOpt Research Engineer Michael McCourt explains the benefits and pitfalls of incorporating Prior Beliefs, a new feature, into your optimization loop to achieve better performance in fewer observations.
Warm Start Tuning and Prior Beliefs Webinar
1. Warm Start Tuning with Prior Beliefs
Thursday, June 4, 2020
Michael McCourt — Head of Research, mccourt@sigopt.com
2. Abstract
SigOpt provides an extensive set of advanced features that help you, the expert, save time while increasing model performance via experimentation. Today, we continue this talk series by discussing how to incorporate your prior beliefs into a SigOpt experiment and inform the optimization with your expert knowledge.
3. Agenda
1. Overview of SigOpt
2. How to convey prior beliefs about parameter performance to SigOpt
3. Discussion of the benefits and risks of prior beliefs
5. SigOpt: API-enabled parameter tuning
SigOpt delivers iterative, automated optimization. SigOpt fits any stack and helps you build the best models.
[Diagram: training and testing data feed an AI/ML/DL or simulation model; a model evaluation or backtest reports the objective metric to SigOpt over the REST API, and SigOpt returns new parameter or hyperparameter configurations for better results. Your data and models stay private, and SigOpt can be deployed on-premises.]
• OPTIMIZATION ENGINE: Explore and exploit with a variety of techniques
• EXPERIMENT INSIGHTS: Track, organize, analyze, and reproduce any model
• ENTERPRISE PLATFORM: Built to fit any stack and scale with your needs
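To make this loop concrete, here is a minimal sketch of the suggestion/observation cycle with the SigOpt Python client; the experiment definition and the toy evaluate_model stub are hypothetical stand-ins for your own model code.

    from sigopt import Connection

    conn = Connection(client_token="YOUR_API_TOKEN")

    # Hypothetical experiment: tune a single learning rate.
    experiment = conn.experiments().create(
        name="XGBoost tuning",
        parameters=[
            dict(name="learning_rate", type="double",
                 bounds=dict(min=0.001, max=0.5)),
        ],
        observation_budget=30,
    )

    def evaluate_model(assignments):
        # Stand-in for your private training and evaluation code; replace
        # with your own model. SigOpt only sees the metric you report back.
        lr = assignments["learning_rate"]
        return -(lr - 0.07) ** 2  # toy objective peaked at lr = 0.07

    for _ in range(experiment.observation_budget):
        # SigOpt proposes a new configuration ...
        suggestion = conn.experiments(experiment.id).suggestions().create()
        # ... you evaluate it privately ...
        value = evaluate_model(suggestion.assignments)
        # ... and report only the objective metric back over the REST API.
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            value=value,
        )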
6. SigOpt is a Black Box Tool
Goal: SigOpt is designed to prioritize users' privacy; no modeling information is required.
Opportunity: Customers can own their modeling process and still benefit from efficient model tuning.
Complication: What if customers want to provide modeling information?
Your models are your own
7. Users have Relevant Knowledge
Example: After many years of developing XGBoost models, a user may have developed intuition about which learning rates perform best.
• Could this intuition accelerate SigOpt's optimization engine?
• How can it be conveyed to SigOpt?
Previous experience could inform the optimization
8. Prior Beliefs Structure
Our new Prior Beliefs feature allows customers to communicate structured information about parameters.
• This information takes the form of a probability density function.
• Parameter values with higher prior belief will receive more interest from the optimizer.
• Prior beliefs become less important as more observations are reported.
What does the user believe about parameters?
9. Prior Beliefs Structure
Example: Using high-performing learning rates from previous months, we can build prior beliefs.
What does the user believe about parameters?
[Figure: a possible prior belief density over the learning rate, with the domain boundary marked.]
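As a sketch of how such a prior might be assembled, the snippet below matches a normal prior's mean and scale to historical high performers; the learning-rate values are made up for illustration, and nothing here comes from SigOpt itself.

    import numpy as np

    # Hypothetical learning rates that performed well in previous months.
    past_good_learning_rates = np.array([0.05, 0.08, 0.04, 0.10, 0.06, 0.07])

    # A normal prior belief is described by a mean and a scale; one simple
    # choice is to match the moments of the historical values.
    prior_mean = float(np.mean(past_good_learning_rates))
    prior_scale = float(np.std(past_good_learning_rates, ddof=1))

    print(f"normal prior: mean={prior_mean:.3f}, scale={prior_scale:.3f}")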
10. Designing Prior Beliefs for SigOpt
At present, SigOpt allows two types of prior beliefs:
• Normal, with parameters mean and scale
• Beta, with parameters shape_a and shape_b
How are prior beliefs structured?
https://app.sigopt.com/docs/overview/parameter_priors
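Following the parameter_priors documentation linked above, a prior belief is attached to each parameter when the experiment is created. Below is a minimal sketch with the SigOpt Python client; the parameter names, bounds, and prior values are illustrative assumptions, not recommendations.

    from sigopt import Connection

    conn = Connection(client_token="YOUR_API_TOKEN")

    experiment = conn.experiments().create(
        name="XGBoost tuning with prior beliefs",
        parameters=[
            dict(
                name="learning_rate",
                type="double",
                bounds=dict(min=0.001, max=0.5),
                # Normal prior: mean and scale.
                prior=dict(name="normal", mean=0.07, scale=0.02),
            ),
            dict(
                name="subsample",
                type="double",
                bounds=dict(min=0.1, max=1.0),
                # Beta prior: shape_a and shape_b.
                prior=dict(name="beta", shape_a=2, shape_b=5),
            ),
        ],
        observation_budget=30,
    )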
12. Benefits of Prior Beliefs
How will prior beliefs affect an experiment?
This prior beliefs feature gives users new power.
Previously: Features existed to define or change the problem under consideration.
Prior beliefs: The problem remains the same, but the optimization engine changes.
Choices made when defining prior beliefs can affect the optimization process in both good and bad ways.
15. Benefits and Dangers of Prior Beliefs
• Prior beliefs can provide early acceleration.
• Eventually, the observed data will overpower any prior beliefs.
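A textbook conjugate-normal calculation illustrates why the data eventually wins (this is an illustration only, not a description of SigOpt's internal model). With a prior \mu \sim \mathcal{N}(\mu_0, \sigma_0^2) and observations y_i \mid \mu \sim \mathcal{N}(\mu, \sigma^2), the posterior mean after n observations is

    \mathbb{E}[\mu \mid y_1, \dots, y_n]
      = \frac{\sigma^2}{\sigma^2 + n \sigma_0^2}\, \mu_0
      + \frac{n \sigma_0^2}{\sigma^2 + n \sigma_0^2}\, \bar{y},

so the prior's weight \sigma^2 / (\sigma^2 + n \sigma_0^2) shrinks toward zero as n grows, and the data mean \bar{y} dominates.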
17. Questions?
Ask now, or feel free to email contact@sigopt.com.
Slides will be available on SlideShare; check your email in the next few days.
18. Learn more about SigOpt
Read our research and product blog at blog.sigopt.com.
Check out our YouTube channel: see more videos at https://sigopt.com/resources/videos.
Try our solution: sign up at sigopt.com/try-it today.
Upcoming webinars:
● Detecting COVID-19 Cases with Deep Learning (Tuesday, June 9 at 10am PT / 1pm ET)