Comments (7)
Hi, thanks for reaching out.
To your first question: is it possible there is noise in the system you're trying to optimize? Or could there be some nonstationarity in your readings (i.e., the output changes over time in a way that is not related to your parameterization)? Both of these make it more difficult for Bayesian optimization to perform well. Our methods do their best to estimate noise internally and optimize for the true value, but sometimes there is simply too much noise for BO. You can use the function `interact_cross_validation_plotly` to get a plot showing how well Ax's model is performing on your data.
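A minimal sketch of producing that cross-validation plot, assuming the Service API and an `ax_client` that has already completed enough trials for a model to be fit (i.e., past the initial Sobol phase):

```python
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.diagnostic import interact_cross_validation_plotly

# Cross-validate the current model in the client's generation strategy.
# If observations are noisy, reporting them to ax_client.complete_trial
# as (mean, SEM) tuples lets the model use the known noise level.
cv_results = cross_validate(model=ax_client.generation_strategy.model)
fig = interact_cross_validation_plotly(cv_results)
fig.show()  # points far from the y = x diagonal indicate poor model fit
```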
To the second question: could you elaborate on what you mean by changing the flow and "carry over"?
@mpolson64 thank you for your reply.
For the second question, changing the flow is described as below:
First, I use Ax's Bayesian optimization to produce a set of parameters for an A/B test with one group of users.
Second, I set those parameters in an A/B test with another group of users.
Carry over means:
I first set parameters in an A/B test with a group of users; after a while, I change the parameters in the A/B test with the same group of users. Carry over means the influence of the first set of parameters on those users lasts for a while even after I change the parameters.
We encountered the same problem. We are using A/B experiments for hyperparameter tuning, with 3 experimental groups, 3 optimization goals, and 1 constraint. Specific information can be found in the JSON file below. Currently, we have encountered the following issue: in the 15th and 16th rounds, we found some promising hyperparameter combinations, for example {"read_quality_factor": 1, "duration_factor": 0.5, "pos_interaction_factor": 0.2, "score_read_factor": 1}, with target effects of {'a': +0.98%, 'b': +0.68%, 'c': +1.49%, 'd': +0.67%}, where the p-values range from 0.005 to 0.08. However, when we run large-scale A/B experiments with these promising hyperparameter combinations, we often find that the effects cannot be replicated. We would like to ask two questions:
1. Does Facebook's hyperparameter-tuning A/B experimentation encounter similar issues? We have already used CUPED to reduce the variance of the experimental data for each round. What optimization suggestions do you have for similar issues?
2. For each experimental group, the same batch of users is used every time hyperparameters are deployed. We suspect that the inability to replicate the experimental effects may be related to carryover. Does Facebook's hyperparameter-tuning A/B experimentation reshuffle the experimental users when deploying hyperparameters?
snapshot.json
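For reference, a minimal numpy sketch of the CUPED adjustment mentioned above, assuming the covariate `x` is the same metric measured on each user before the experiment:

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """CUPED: remove the part of metric y that is predictable from a
    pre-experiment covariate x. This reduces variance without changing
    the expected treatment effect, since E[x] is the same in all groups.
    """
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # regression slope
    return y - theta * (x - x.mean())
```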
Hi all, I would definitely recommend “reshuffling” (or simply creating a new experiment) for each batch; otherwise you will have carryover effects. Variance reduction is always a good idea. We use regression adjustment with pre-treatment covariates, along the lines of CUPED, for most A/B tests.
Second, 3 arms per batch is probably inefficient / problematic. Typically we use at least 8, but sometimes as many as 64. For 3 parameters, though, maybe 5 could be OK. The GP borrows strength across conditions, so you can make the allocations smaller than you normally would if you wanted an appropriately powered A/B test.
Note that A/B tests cause some non-stationarity, in that treatment effects change over time. I recommend making sure each batch runs for enough time to “settle down”, and using the same number of days per batch.
There is a more sophisticated adjustment procedure that we use at Meta. If you send me an email (which you can find at http://eytan.GitHub.io), I can send you a preprint that explains the considerations and procedures in more detail.
Best, E
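Following up on the batch-size suggestion above, a minimal sketch of generating an 8-arm batch with Ax's Developer API; `experiment` is assumed to be an existing Ax `Experiment` with data attached, and `Models.BOTORCH_MODULAR` assumes a recent Ax version:

```python
from ax.modelbridge.registry import Models

# Fit a GP to all data collected so far, then generate one batch of
# 8 candidate arms to deploy to a freshly reshuffled user population.
data = experiment.fetch_data()
model = Models.BOTORCH_MODULAR(experiment=experiment, data=data)
generator_run = model.gen(n=8)  # 8 arms per batch, per the advice above
trial = experiment.new_batch_trial(generator_run=generator_run)
trial.run()  # assumes a Runner is attached to the experiment
```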
Thank you for your suggestion.
Hi @eytan,
I have sent you an email using the address at http://eytan.GitHub.io and am looking forward to your reply. Thank you very much.
Hi @eytan, I would also like the preprint that explains the considerations and procedures you use at Meta; could you send it to me by email?
My email address is [email protected].
I am really looking forward to your reply.