When doing DOE, your models are usually built as part of one stage in a campaign. The decision about what to do next is therefore made based on the goals of the stage in question.
In general, the first question is whether you've achieved the goals of that stage: did you get enough information to make a decision, or do you need another experiment to gather more before you can decide what to do? For example, if you went outside the acceptable range of one or more factors and ended up with a lot of zeroes in the response, it's quite possible you haven't learned anything.
If you did get some information, the decision is then which stage to move on to next, and how to choose your next design (including factor levels).
Typically, if you're trying to optimize some system, the goal at the early stages is to get close to a decent level of response, then reach the point where you can run an optimization design to build a model good enough to predict where your optimum will be.
If you're starting out and responses are low, the usual question is how to set the factors to improve them. So typically the next step is to predict where the response is better, then explore the region around that point.
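As a minimal sketch of that step (all factor settings and responses here are made up for illustration), a first-order model fitted to a small two-level factorial suggests a direction of steepest ascent in which to place the next runs:

```python
import numpy as np

# Hypothetical 2^2 factorial in coded units (-1, +1) plus a centre point
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([12.0, 20.0, 15.0, 27.0, 18.0])  # made-up responses

# Fit a first-order model y ~ b0 + b1*x1 + b2*x2 by least squares
A = np.column_stack([np.ones(len(X)), X])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

# Path of steepest ascent: step along the coefficient vector
direction = np.array([b1, b2]) / np.linalg.norm([b1, b2])
steps = [direction * s for s in (0.5, 1.0, 1.5, 2.0)]  # candidate next points
```

The candidate points in `steps` (still in coded units) would be translated back to real factor settings and run in order until the response stops improving.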
If your response is good and your model indicates that some of your factors aren't making a big difference, you'd typically fix their values and run a denser design on the rest, possibly an optimization design if few enough factors remain and they're all or mostly continuous.
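For instance, once the unimportant factors are fixed, a common optimization design for the remaining continuous factors is a face-centred central composite design. A sketch of its construction in coded units (not tied to any particular DOE package; the run counts are just what the construction implies):

```python
import itertools
import numpy as np

def face_centred_ccd(k, n_centre=3):
    """Face-centred central composite design in coded units for k factors."""
    # Two-level factorial corners
    corners = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
    # Axial points on the faces of the cube (one +1 and one -1 per factor)
    axial = np.vstack([v for i in range(k) for v in (np.eye(k)[i], -np.eye(k)[i])])
    # Replicated centre points to estimate pure error
    centre = np.zeros((n_centre, k))
    return np.vstack([corners, axial, centre])

design = face_centred_ccd(2)  # 4 corners + 4 axial + 3 centre = 11 runs
```

This design supports a full second-order model, which is what you need to predict curvature and hence an optimum.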
If you've done an optimization design, you would typically predict the location of the optimum, then design a small experiment to test that prediction along with the effect of small changes to the factors around that point.
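The predicted optimum of a fitted second-order model can be found analytically from its stationary point. A sketch with made-up coefficients for two coded factors:

```python
import numpy as np

# Hypothetical fitted quadratic model: y = b0 + b.x + x.B.x (coded units)
b0 = 50.0
b = np.array([2.0, -1.0])        # linear coefficients
B = np.array([[-3.0, 0.5],       # quadratic and interaction coefficients
              [0.5, -2.0]])      # (negative definite, so this is a maximum)

# Stationary point: gradient b + 2*B*x = 0  =>  x* = -0.5 * B^-1 b
x_star = -0.5 * np.linalg.solve(B, b)
y_star = b0 + b @ x_star + x_star @ B @ x_star  # predicted optimum response
```

The follow-up experiment would then place a few runs at `x_star` and at small offsets around it to check both the predicted value and the local sensitivity.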
What if there’s no model?
In the early stages of exploring an unknown experimental system it's very common to find that you can't build any model at all: no terms come out significant, except perhaps the intercept.
This can happen because you've overstepped the boundaries of what's safe and have a lot of zeroes in the response, or it could just be that there's no consistent pattern in the results, which usually means a combination of some factor ranges being too high and others too low.
Situations like this are often scary to those new to DOE, since the usual presentation of the method (based as it is on building linear models) talks about experiments as if there's always a good model and you just need to figure out what it is and what it means. The reality is often very different: experimentation is only done when you don't know something, and you rarely get everything right the first time.
The bad news is that you will need to do at least one more experiment to get where you're trying to go. The good news is that you've still learned something: you know what doesn't work. And typically at least a few of the points you've investigated show promise.
You usually have two options for the next experiment here:
1. Go back to scoping. A space-filling design is good here if you have the run budget: it uses a lot of levels of each factor, so you'll start to see where the edges of your space are.
2. Choose the best point of interest and start exploring the region near it, perhaps being somewhat cautious about how wide your factor ranges are around that point.
Option 1 is more systematic, but requires quite a lot of runs to be really effective. It's really the only option if you haven't found any points that give you a meaningful response.
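A common space-filling choice for option 1 is a Latin hypercube, which guarantees every factor is sampled at a different level in every run. A minimal dependency-free sketch (the factor names and ranges are purely illustrative):

```python
import numpy as np

def latin_hypercube(n_runs, n_factors, seed=0):
    """Random Latin hypercube sample in the unit cube [0, 1)^n_factors."""
    rng = np.random.default_rng(seed)
    strata = np.arange(n_runs)
    # One independently shuffled stratum assignment per factor,
    # jittered uniformly within each stratum
    cols = [rng.permutation(strata) for _ in range(n_factors)]
    return (np.column_stack(cols) + rng.random((n_runs, n_factors))) / n_runs

# 20-run, 3-factor design scaled to hypothetical factor ranges
unit = latin_hypercube(20, 3)
lower = np.array([20.0, 4.0, 1.0])   # e.g. temperature, pH, time (assumed)
upper = np.array([80.0, 9.0, 24.0])
runs = lower + unit * (upper - lower)
```

Because each factor takes a different value in every run, even failed runs map out where the edges of the feasible region lie.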
Option 2 requires fewer runs, but is only really possible if you've found a point with a decent response. There's still some residual uncertainty over how far around that point it's safe to explore; however, you may be able to figure out which settings are really bad by looking at the failed runs. What do they have in common? You can often reverse the usual modelling practice and try to predict what doesn't work, giving you insight into what to avoid.
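A crude sketch of this reverse modelling, with made-up runs and failure flags: compare the average factor settings of failed and successful runs, and treat any factor with a large gap as a candidate driver of the failures.

```python
import numpy as np

# Hypothetical runs in coded units, flagged where the response was zero
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [1, 0]], dtype=float)
failed = np.array([False, True, False, True, False, True])

# Mean setting of each factor among failed vs successful runs:
# a large gap on a factor suggests it is implicated in the failures
gap = X[failed].mean(axis=0) - X[~failed].mean(axis=0)
```

Here the first factor shows a large gap (failures cluster at its high end) while the second shows none, suggesting the next design should pull the first factor's upper limit back. With more runs, a proper classifier (e.g. logistic regression on the failure flag) does the same job more formally.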