We talked briefly about attribution errors earlier. These arise when you are measuring downstream behavior that follows from a change, and that gets into a gray area. You need to understand how you are actually counting conversions as a function of their distance from the thing you changed, and agree with your business stakeholders up front on how you are going to measure those effects.
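For example, you might give full credit to conversions that happen immediately after the changed page, and progressively less credit the further downstream they occur. Here is a minimal sketch of that idea; the session format, the `changed_page` and `conversion` event names, and the decay factor are all assumptions you would need to agree on with your stakeholders, not a standard API:

```python
def attributed_conversions(sessions, decay=0.5):
    """Credit each conversion by how far downstream it is from the change.

    sessions: list of per-session event lists, ordered in time
    decay: credit multiplier per step away from the changed page
    """
    total_credit = 0.0
    for events in sessions:
        if "changed_page" not in events:
            continue  # this session never saw the change; no credit
        exposure = events.index("changed_page")
        for i, event in enumerate(events[exposure + 1:], start=1):
            if event == "conversion":
                # A conversion i steps downstream earns decay**(i-1) credit;
                # a conversion on the very next page counts fully.
                total_credit += decay ** (i - 1)
    return total_credit

sessions = [
    ["changed_page", "conversion"],            # full credit: 1.0
    ["changed_page", "detail", "conversion"],  # one step removed: 0.5
    ["home", "detail", "conversion"],          # never saw the change: 0
]
print(attributed_conversions(sessions))  # 1.5
```

Whether you use a decay like this, a fixed attribution window, or only count immediate conversions matters less than making the choice explicit before the experiment starts.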
You also need to be aware of whether you are running multiple experiments at once: will they conflict with one another? Is there a page flow where someone might encounter two different experiments within the same session? One way to find out is to scan your assignment logs for sessions that were exposed to more than one experiment, as sketched below.
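This is a minimal sketch, assuming your logs can be reduced to (session_id, experiment_id) pairs; the data and experiment names here are made up:

```python
from collections import defaultdict

# Hypothetical assignment log: which experiment each session was placed in.
assignments = [
    ("s1", "new_button_color"),
    ("s1", "new_checkout_flow"),  # s1 saw two experiments in one session
    ("s2", "new_button_color"),
    ("s3", "new_checkout_flow"),
]

experiments_by_session = defaultdict(set)
for session_id, experiment_id in assignments:
    experiments_by_session[session_id].add(experiment_id)

overlapping = {s: exps for s, exps in experiments_by_session.items()
               if len(exps) > 1}
print(overlapping)  # s1 was exposed to both experiments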
Where such overlaps exist, you have to apply your judgment as to whether the changes could actually interfere with each other in some meaningful way and affect customers' behavior. Again, you need to take these results with a grain of salt. There are a lot of things that can skew results, and you need to be aware of them. Make sure your business owners are also aware of the limitations of A/B tests, and all will be okay.
Also, if you're not in a position to devote much time to an experiment, you need to take those results with a grain of salt and ideally retest them later on during a different time period. Seasonal effects, sales, or unusual traffic during a short window can all produce a result that won't hold up later.
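A retest can be as simple as re-running the same significance test on a later window and checking whether the effect holds up. This is a minimal sketch using a chi-squared test; all of the conversion counts here are made up:

```python
from scipy.stats import chi2_contingency

def significant(control, variant, alpha=0.05):
    """Each argument is a (conversions, non-conversions) pair."""
    _, p, _, _ = chi2_contingency([control, variant])
    return p < alpha

# Original short experiment, run during a holiday sale:
print(significant(control=(30, 970), variant=(55, 945)))  # True

# Retest months later under normal traffic; the effect vanishes,
# suggesting the original result may have been seasonal noise:
print(significant(control=(32, 968), variant=(33, 967)))  # False
```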