I will present the recent work of my research group on the design, analysis, and practical performance of stochastic-gradient-based algorithms for solving constrained continuous optimization problems, such as those that arise when training machine learning models subject to constraints. I will summarize our work on stochastic interior-point and sequential-quadratic-programming algorithms, for which we have established convergence guarantees (in probability and/or almost surely) under loose assumptions on the problem functions and on the stochastic errors in the objective gradient estimates. The results of numerical experiments on physics-informed and fair learning problems demonstrate the practical performance of our proposed techniques.
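
For context, a representative formulation of the problem class that such algorithms address is sketched below; the notation is illustrative and not taken from the talk itself:

\[
\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{subject to} \quad c(x) = 0, \quad x \geq 0, \qquad \text{where} \quad f(x) = \mathbb{E}_{\xi}\!\left[ F(x, \xi) \right],
\]

with each iteration employing a stochastic estimate \( g_k \approx \nabla f(x_k) \) of the objective gradient in place of the exact gradient, consistent with the abstract's reference to stochastic errors in the objective gradient estimates.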