The real challenge with AutoML isn't just pipeline creation, but ensuring that the generated models generalize well to unseen data and real-world scenarios. Your project here is neat, and I appreciate that it includes support for preprocessing and model explanations, as these are often overlooked.
I wonder, though, about the robustness and reliability of the models it generates. AutoML, in its essence, risks overfitting or underfitting if not carefully managed, and it introduces a layer of indirection that can make debugging far more difficult. How does this project avoid those pitfalls?

It would be interesting to see how your tool performs on diverse datasets and how resilient the models are against drift over time. Nevertheless, it's always good to see new takes on AutoML. Keep probing the space and refining your approach.
You can run the web app locally. What's more, you can adjust the notebooks' code to your needs: for example, you can set different validation strategies or evaluation metrics, or allow longer training times. The notebooks in the repo are a good starting point for developing more advanced apps.
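To make the "adjust the notebooks" point concrete: swapping the validation strategy or the evaluation metric usually comes down to changing a couple of arguments or small functions. Here is a minimal, library-agnostic sketch of that idea; the `kfold_indices`, `evaluate`, and mean-predictor helpers are hypothetical illustrations, not code from the repo:

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation.
    Swapping this generator out is one way to change the validation strategy."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

def mae(y_true, y_pred):
    """Mean absolute error -- one example of a pluggable metric."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def evaluate(X, y, fit, predict, metric, k=5):
    """Cross-validate a fit/predict pair under a chosen metric and fold count."""
    scores = []
    for train, val in kfold_indices(len(y), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        preds = [predict(model, X[i]) for i in val]
        scores.append(metric([y[i] for i in val], preds))
    return sum(scores) / len(scores)

# Toy baseline model: always predict the training-set mean.
# In a real notebook this slot would hold an AutoML candidate pipeline.
fit = lambda X, y: sum(y) / len(y)
predict = lambda model, x: model

X = list(range(20))
y = [2.0 * v for v in X]
score = evaluate(X, y, fit, predict, mae, k=5)
```

Changing `k`, `seed`, or the `metric` argument mirrors the kinds of edits the notebooks invite; a time budget would just be an extra loop bound around the candidate-model search.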