Was requested to post this here.
Hey, I made an issue ticket for something I encountered: https://github.com/great-expectations/great_expectations/issues/1795.
It has to do with DataContextConfig. Long story short: is there any way to dynamically generate a DataContext via the API (as shown in the docs: https://docs.greatexpectations.io/en/latest/guides/how_to_guides/configuring_data_contexts/how_to_instantiate_a_data_context_on_an_emr_spark_cluster.html) in order to run a Checkpoint? It seems that right now only YAML configs are accepted.
A BaseDataContext that you instantiate programmatically without a configuration file (as described in the how-to guide linked in the question) indeed does not have the capability to run Checkpoints.
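For reference, instantiating a context from an in-memory config (as that guide describes) looks roughly like the sketch below. All the names here (datasource name, store names, directories) are illustrative placeholders, not part of the GE API, and the exact DataContextConfig fields depend on your Great Expectations version:

```python
def build_context():
    """Sketch: build a DataContext programmatically, with no great_expectations.yml.

    Assumes Great Expectations is installed; datasource/store names and
    directories below are illustrative, not required by the library.
    """
    from great_expectations.data_context import BaseDataContext
    from great_expectations.data_context.types.base import DataContextConfig

    project_config = DataContextConfig(
        config_version=2,
        plugins_directory=None,
        config_variables_file_path=None,
        # A hypothetical Pandas datasource for validating in-memory DataFrames
        datasources={
            "my_pandas_datasource": {
                "class_name": "PandasDatasource",
                "batch_kwargs_generators": {},
            }
        },
        stores={
            "expectations_store": {
                "class_name": "ExpectationsStore",
                "store_backend": {
                    "class_name": "TupleFilesystemStoreBackend",
                    "base_directory": "/tmp/ge/expectations",
                },
            },
            "validations_store": {
                "class_name": "ValidationsStore",
                "store_backend": {
                    "class_name": "TupleFilesystemStoreBackend",
                    "base_directory": "/tmp/ge/validations",
                },
            },
            "evaluation_parameter_store": {"class_name": "EvaluationParameterStore"},
        },
        expectations_store_name="expectations_store",
        validations_store_name="validations_store",
        evaluation_parameter_store_name="evaluation_parameter_store",
        data_docs_sites={},
        # The operator you will invoke via run_validation_operator
        validation_operators={
            "action_list_operator": {
                "class_name": "ActionListValidationOperator",
                "action_list": [
                    {
                        "name": "store_validation_result",
                        "action": {"class_name": "StoreValidationResultAction"},
                    }
                ],
            }
        },
        anonymous_usage_statistics={"enabled": False},
    )
    return BaseDataContext(project_config=project_config)
```

The config is wrapped in a helper so it can be dropped into a job script (e.g. an EMR step) as-is.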
Since a Checkpoint is a thin wrapper around ValidationOperators, you can run a ValidationOperator instead by invoking the context's run_validation_operator method.
This folder in the repo contains Jupyter notebooks that show, step by step, how to prepare the arguments for a ValidationOperator: https://github.com/great-expectations/great_expectations/tree/develop/great_expectations/init_notebooks (there are versions for Pandas, Spark, and SQLAlchemy).
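Once you have a programmatically built context, the operator call itself looks roughly like this. The suite name, datasource name, and operator name are placeholders that must match whatever your DataContextConfig defines:

```python
def validate_dataframe(context, df, suite_name="my_suite",
                       operator_name="action_list_operator"):
    """Sketch: validate a Pandas DataFrame via a ValidationOperator.

    `context` is assumed to be a BaseDataContext whose config defines a
    Pandas datasource named "my_pandas_datasource" and a validation
    operator named `operator_name` -- both names are illustrative.
    """
    # Make sure the expectation suite exists (no-op content; add
    # expectations separately or load an existing suite instead)
    context.create_expectation_suite(suite_name, overwrite_existing=True)

    # Bind the in-memory DataFrame to the suite as a batch
    batch = context.get_batch(
        batch_kwargs={"datasource": "my_pandas_datasource", "dataset": df},
        expectation_suite_name=suite_name,
    )

    # Run the operator; the result object reports overall success
    results = context.run_validation_operator(
        operator_name, assets_to_validate=[batch]
    )
    return results.success
```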