skit_pipelines.utils package

Submodules

skit_pipelines.utils.cookies module

fetch_latest_cookies(cookie_save_path: str = '/tmp/kf_cookies.json') → Dict[str, str]
load_cookies(COOKIES_PATH: str = '/tmp/kf_cookies.json') → Dict[str, str]
save_cookies(cookies: Dict[str, str], COOKIES_PATH: str = '/tmp/kf_cookies.json') → None
simulate_selenium_connection(username, password) → List[Dict[str, Any]]
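
A minimal usage sketch based only on the signatures above; the cookie key and token shown are placeholders, not necessarily what the module stores:

from skit_pipelines.utils.cookies import fetch_latest_cookies, load_cookies, save_cookies

# Fetch a fresh set of Kubeflow session cookies and persist them to disk.
cookies = fetch_latest_cookies(cookie_save_path="/tmp/kf_cookies.json")

# Later, reuse the persisted cookies without re-authenticating.
cached = load_cookies(COOKIES_PATH="/tmp/kf_cookies.json")

# Cookies obtained elsewhere can be persisted with the same helper.
save_cookies({"session": "placeholder-token"}, COOKIES_PATH="/tmp/kf_cookies.json")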

skit_pipelines.utils.helpers module

convert_audiourl_to_filename(audiourl: str) → str

Converts an S3 HTTP turn-audio URL into the actual filename with a .wav extension.

get_unix_epoch_timestamp_from_s3_presigned_url(s3_http_turn_audio_url: str) → int

Returns the Unix epoch timestamp present in the URL, which is assumed to be in seconds.

re_presign_audio_url_if_required(s3_http_turn_audio_url: str) → str

Checks whether the S3 HTTP URL for the turn audio is still valid (for downloading purposes only); if not, re-presigns the URL using the S3 client so that it remains valid for the next 7 days.

This is highly specific to the US region, since the bucket there is private.
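
A usage sketch based only on the signatures and docstrings above; the URL is a placeholder:

from skit_pipelines.utils.helpers import (
    convert_audiourl_to_filename,
    get_unix_epoch_timestamp_from_s3_presigned_url,
    re_presign_audio_url_if_required,
)

# Placeholder presigned URL; real URLs carry an expiry timestamp as a query parameter.
audio_url = "https://example-bucket.s3.amazonaws.com/turns/abc123?Expires=1700000000"

# Re-presign the URL if it is no longer valid for downloading.
fresh_url = re_presign_audio_url_if_required(audio_url)

# Epoch timestamp (in seconds) embedded in the presigned URL.
expires_at = get_unix_epoch_timestamp_from_s3_presigned_url(fresh_url)

# Filename to store the turn audio under, with a .wav extension.
filename = convert_audiourl_to_filename(fresh_url)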

skit_pipelines.utils.k8s module

get_pipeline_config_kfp(pipeline_name)

skit_pipelines.utils.login module

kubeflow_login(force: bool = False) → kfp._client.Client
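
A minimal sketch of logging in and using the returned client; list_experiments is standard kfp Client API, not part of this module:

from skit_pipelines.utils.login import kubeflow_login

# Force a fresh login instead of reusing any cached session.
client = kubeflow_login(force=True)

# The return value is a regular kfp Client, so the usual kfp calls apply.
print(client.list_experiments())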

skit_pipelines.utils.normalize module

comma_sep_str(string: str, fn=None) → list
non_blank(c: str) → bool
strip(c: str) → str
to_camel_case(name: str) → str
to_snake_case(name: str) → str
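
An illustrative sketch, assuming these helpers behave as their names suggest; the exact return values are not documented here:

from skit_pipelines.utils.normalize import comma_sep_str, to_camel_case, to_snake_case

# Split a comma-separated string into a list; fn is presumably applied to each item.
labels = comma_sep_str("confirm, cancel, retry", fn=str.upper)

# Convert between naming conventions.
snake = to_snake_case("AudioUrl")   # expected to resemble "audio_url"
camel = to_camel_case("audio_url")  # expected to resemble "audioUrl" or "AudioUrl"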

skit_pipelines.utils.storage module

create_storage_path(storage_options: StorageOptions, path: str)

skit_pipelines.utils.webhook module

send_webhook_request(url: str, data: Dict[str, Any])

Send finished runs to the webhook URL.

validate_request_success(resp: Union[requests.models.Response, str])

Validate the response from the webhook request
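
A usage sketch based on the signatures above; the endpoint and payload are placeholders, and it is assumed that send_webhook_request returns the response object that validate_request_success expects:

from skit_pipelines.utils.webhook import send_webhook_request, validate_request_success

payload = {"run_id": "placeholder-run", "status": "finished"}

# Post the finished-run payload to the webhook endpoint.
resp = send_webhook_request("https://example.com/webhook", payload)

# Flag the run if the webhook did not accept the request.
validate_request_success(resp)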

Module contents

class SlackBlockFactory(content)

Bases: object

build() → Dict
code_block(content)
ping(cc)
text()
text_block()
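
A sketch of how the factory might be driven; whether the methods return self (allowing chaining) is not documented, so they are called separately here, and the Slack id is a placeholder:

from skit_pipelines.utils import SlackBlockFactory

factory = SlackBlockFactory("Pipeline run finished.")
factory.text_block()       # wrap the content in a Slack text block
factory.ping("U0123456")   # placeholder Slack id to cc on the message
payload = factory.build()  # assemble the final blocks payload as a dict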
create_dir_name(org_id: str, dir_type: str) → str
create_file_name(reference: str, file_type: str, ext='.csv') → str
filter_schema(schema: Dict[str, Any], filter_list: list) → Dict[str, Any]
generate_generic_dir_name(name: str) → str
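
An illustrative sketch of the remaining helpers; the arguments are placeholders and the exact formats of the returned names are not documented here:

from skit_pipelines.utils import (
    create_dir_name,
    create_file_name,
    filter_schema,
    generate_generic_dir_name,
)

# Directory name scoped to an organisation and a directory type.
data_dir = create_dir_name(org_id="42", dir_type="tagged")

# File name built from a reference and a file type, with a .csv extension by default.
report = create_file_name(reference="weekly", file_type="calls")

# Filter a schema mapping by a list of keys (exact semantics not documented here).
subset = filter_schema({"a": 1, "b": 2}, ["a"])

# Generic directory name derived from an arbitrary label.
scratch = generate_generic_dir_name("intent-eval")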