Good points about the value of a “stupid” model; I would call it a “simple” model. I like the emphasis on simulating or estimating the output of a trivial or random model as well as a simple one. There is nothing at all wrong with supplementing it, or indeed building it, on the basis of a tested set of hypotheses about the phenomenological connections between variables: the governing equations tend to be very simple to solve, a phenomenological model reduces the number of parameters tremendously and provides a baseline against which to test the residuals for noise, and those few parameters have physical significance.
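To make the idea concrete, here is a minimal sketch (my own toy example, not from the original comment) of comparing a trivial baseline against a simple phenomenological model and checking that the residuals of the simple model look like noise. The data-generating law and the numbers are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an assumed simple phenomenological law (linear) plus noise.
x = np.linspace(0.0, 10.0, 200)
y = 2.5 * x + 1.0 + rng.normal(0.0, 0.5, size=x.shape)

# Trivial baseline: predict the sample mean everywhere.
trivial_pred = np.full_like(y, y.mean())

# Simple phenomenological model: a two-parameter linear fit.
slope, intercept = np.polyfit(x, y, 1)
simple_pred = slope * x + intercept

# If the model captures the phenomenology, the residuals should be noise:
# roughly zero-mean and with variance near the noise level.
residuals = y - simple_pred
print("trivial MSE:", np.mean((y - trivial_pred) ** 2))
print("simple  MSE:", np.mean(residuals ** 2))
print("residual mean:", residuals.mean())
```

The point of the comparison is exactly the one above: two physically meaningful parameters explain almost all of the structure, and whatever is left over can be tested as noise.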
Before building a more complex model (of which the simple one may or may not be an order or asymptotic approximation), there are two things I would suggest. First, see whether your users are using the output as intended and understand the goals of what you are trying to do; this is a great place to pause and do user training. Second, build a backtest framework to collect measurements and construct a business metric (beyond the problematic accuracy, precision, AUROC, Gini or Panini coefficients) that measures how your bottom line depends on the model. This allows a functional test of a better model, as opposed to just some sparkly cool math reasons for improving it. And it allows for what I call “Data Science Democracy”: anybody in the org can propose a model, and if it is “reasonable”, DS and Engg can help build it and backtest it. Subject-matter domain experts can contribute as well, instead of concentrating modeling decision-making in the hands of some priesthood.
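A backtest harness of this kind can be very small. The sketch below is a hypothetical illustration: the names `business_metric` and `backtest`, the payoff numbers, and the candidate models are all assumptions of mine, standing in for whatever bottom-line logic and historical data your org actually has. The structure is the point: any proposed model is just a callable, scored on held-out historical windows by the same business metric.

```python
from typing import Callable
import numpy as np

def business_metric(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Toy bottom-line metric: gain per correct call minus cost per miss.
    The 10.0 and 3.0 are made-up numbers; plug in your real P&L logic."""
    correct = (y_pred == y_true)
    return 10.0 * correct.sum() - 3.0 * (~correct).sum()

def backtest(model: Callable[[np.ndarray], np.ndarray],
             X: np.ndarray, y: np.ndarray, n_splits: int = 5) -> float:
    """Score a proposed model on successive historical windows
    and return the mean business-metric value."""
    fold = len(X) // n_splits
    scores = []
    for i in range(1, n_splits):
        test = slice(i * fold, (i + 1) * fold)
        scores.append(business_metric(y[test], model(X[test])))
    return float(np.mean(scores))

# Stand-in "historical" data.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)

# Anybody can propose a model: here, a trivial one and a threshold rule.
always_zero = lambda X: np.zeros(len(X), dtype=int)
threshold = lambda X: (X[:, 0] > 0).astype(int)

print("always-zero:", backtest(always_zero, X, y))
print("threshold:  ", backtest(threshold, X, y))
```

Because every candidate goes through the same `backtest` call, the comparison is functional (does the bottom line improve?) rather than aesthetic, which is what makes the “Data Science Democracy” workable in practice.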