Skit-pipelines

Reusable workflows for ML teams at skit.ai, built using Kubeflow components.

[Contribution guide]

Components

A component does one thing really well. As an example, to download a dataset and train a model, you would:

  1. Query a database.

  2. Save the results as a file.

  3. Prepare train/test datasets for training.

  4. Run a program to train the model.

  5. Save the model once training is complete.

  6. Evaluate the model on the test set to benchmark performance.

  7. Repeat steps 1 - 6 until results are favourable.

  8. Persist the best model on the cloud.

  9. Persist the best results on the cloud.

Each step here is a component. As long as components ensure single responsibility we can build complex pipelines conveniently.
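The steps above can be sketched as small single-responsibility functions composed in sequence. This is plain Python with stubbed data, and every name here is hypothetical rather than an actual skit-pipelines component; in this project each function would be wrapped as a Kubeflow component.

```python
# Illustrative sketch: each step is its own single-responsibility function.
import json
import os
import tempfile

def query_database(query: str) -> list:
    # Step 1: fetch rows (stubbed with in-memory data).
    return [{"text": "hello", "label": 0}, {"text": "bye", "label": 1}]

def save_results(rows: list, path: str) -> str:
    # Step 2: persist the rows as a file.
    with open(path, "w") as f:
        json.dump(rows, f)
    return path

def prepare_splits(rows: list, train_frac: float = 0.7) -> tuple:
    # Step 3: deterministic train/test split.
    k = round(len(rows) * train_frac)
    return rows[:k], rows[k:]

def train_model(train_rows: list) -> dict:
    # Steps 4-5: train and keep the resulting "model" (stubbed).
    return {"n_train": len(train_rows)}

def evaluate(model: dict, test_rows: list) -> float:
    # Step 6: benchmark on the held-out split (stubbed metric).
    return 1.0 if model["n_train"] > 0 else 0.0

# Steps 1-6 composed; a real pipeline wires components instead of calling functions.
rows = query_database("SELECT * FROM calls")
path = save_results(rows, os.path.join(tempfile.gettempdir(), "rows.json"))
train, test = prepare_splits(rows)
model = train_model(train)
score = evaluate(model, test)
```

Because each function owns exactly one step, any of them can be swapped or reused in another pipeline without touching the rest.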

Attention

Suppose a component performs a 70-30 split on a given dataset and then trains a model. Such a component cannot handle a dataset that should be used entirely for training: it will always hold out 30% of the data. Keeping splitting and training in separate components avoids this.
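One way to avoid this rigidity is to expose the split fraction as a parameter instead of hard-coding it, so `train_frac=1.0` uses the entire dataset for training. A hypothetical sketch, not a component in this repo:

```python
def prepare_splits(rows: list, train_frac: float = 0.7) -> tuple:
    # train_frac=1.0 keeps every row for training; the test split is empty.
    if not 0.0 < train_frac <= 1.0:
        raise ValueError("train_frac must be in (0, 1]")
    k = round(len(rows) * train_frac)
    return rows[:k], rows[k:]

train, test = prepare_splits(list(range(10)))        # default 70-30 split
full, empty = prepare_splits(list(range(10)), 1.0)   # whole dataset for training
```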

Pipelines

Pipelines are complex ML workflows that are required regularly, such as training a model, sampling data, getting data annotated, and producing metrics.

Here’s a list of official pipelines. Within these docs we share snippets for slack-bot invocations:

  1. Random sample calls

  2. Download tagged dataset

  3. Download tagged entity dataset

  4. Publish Compliance breaches

  5. Transcribe Dataset

  6. Retrain SLU

  7. Random sample and tag turns and calls

  8. Generate sample conversations

  9. Generate and upload conversations

  10. Invalidate situations in DB for LLM

Project structure

Understand the directory structure.

.
├── build
│   └── fetch_calls_pipeline.yaml
├── ... (skipping other files)
├── skit_pipelines
│   ├── components
│   │   ├── fetch_calls.py
│   │   ├── __init__.py
│   │   └── upload2s3.py
│   ├── constants.py
│   ├── __init__.py
│   ├── pipelines
│   │   ├── fetch_calls_pipeline.py
│   │   └── __init__.py
│   └── utils
│       └── __init__.py
└── tests
  • The components module: each file corresponds to exactly one component.

  • The pipelines module: each file corresponds to exactly one pipeline.

  • Reusable functions go in utils.

  • constants.py contains constants to prevent typos and assist with code completion.

  • build houses our pipeline yamls. These become important in a later section.

It is necessary to understand the anatomy of a kubeflow component and pipeline before contributing to this project.

Making new pipelines

Once a new pipeline and its prerequisite components are ready:

  1. Add an entry to the CHANGELOG.md.

  2. Create a new tag with an updated semver and push; our GitHub Actions take care of pushing the image to our private ECR.

  3. Run make all. This rebuilds all the pipeline yamls and creates a secrets dir. It won’t work without s3 credentials.

  4. Run source secrets/env.sh. You may not have this file if you aren’t part of skit.ai.

  5. Upload the yamls to the Kubeflow UI or use them via the SDK.

Pre-requisites

  • This project is based on python 3.10; you need an environment set up for it. Using miniconda is recommended.

  • make mac

  • poetry

Local development

  • Source secrets.

    dvc pull && source secrets/env.sh
    
  • Run

    uvicorn skit_pipelines.api.endpoints:app \
    --proxy-headers --host 0.0.0.0 \
    --port 9991 \
    --workers 1 \
    --reload
    

Responses

Endpoint responses

{
   "status":"ok",
   "response":{
      "message":"Pipeline run created successfully.",
      "name":"train-voicebot-xlmr",
      "run_id":"e33879a1-xxxxx",
      "run_url":"https://kubeflow.skit.ai/pipeline/?ns=..."
   }
}
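A client can check status before following run_url. A minimal sketch parsing the sample payload above; only the fields shown there are assumed:

```python
import json

# Sample payload copied from the endpoint response above.
raw = """
{
   "status": "ok",
   "response": {
      "message": "Pipeline run created successfully.",
      "name": "train-voicebot-xlmr",
      "run_id": "e33879a1-xxxxx",
      "run_url": "https://kubeflow.skit.ai/pipeline/?ns=..."
   }
}
"""

payload = json.loads(raw)
if payload["status"] != "ok":
    raise RuntimeError(payload["response"]["message"])
run_url = payload["response"]["run_url"]
```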

Webhook responses

Success

{
   "status": "ok",
   "response": {
      "message": "Run completed successfully.",
      "run_id": "662b9909-d251-45f8-a8xxxxx",
      "run_url": "https://kubeflow.skit.ai/pipeline/?ns=...",
      "file_path": "/tmp/outputs/Output/data",
      "s3_path": "<artifact s3_path tar file>",
      "webhook": true
   }
}

Error

{
   "status": "error",
   "response": {
      "message": "Run failed.",
      "run_id": "662b9909-d251-45f8xxxxxxxx",
      "run_url": "https://kubeflow.skit.ai/pipeline/?ns=...",
      "file_path": null,
      "s3_path": null,
      "webhook": true
   }
}

Pending

{
   "status": "pending",
   "response": {
      "message": "Run in progress.",
      "run_id": "662b9909-d251-45f8-axxxxxxxxx",
      "run_url": "https://kubeflow.skit.ai/pipeline/?ns=...",
      "file_path": null,
      "s3_path": null,
      "webhook": true
   }
}
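A webhook consumer can branch on the status field, since the three payloads above share the same shape. A hypothetical handler, not part of this project’s API:

```python
def handle_webhook(payload: dict) -> str:
    # Summarise a webhook payload based on its status field.
    status = payload["status"]
    resp = payload["response"]
    if status == "ok":
        return f"run {resp['run_id']} finished; artifacts at {resp['s3_path']}"
    if status == "error":
        return f"run {resp['run_id']} failed: {resp['message']}"
    if status == "pending":
        return f"run {resp['run_id']} still in progress"
    raise ValueError(f"unknown status: {status}")
```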
