Use Kubeflow Volumes
Volumes in Kubeflow are managed as Kubernetes Persistent Volume Claims (PVCs). They provide persistent storage for your data, workspaces, and models, independent of the lifecycle of your Notebook servers or other workloads.
TOC
- Create a Volume
- Manage Volumes
- Use a Volume in Notebooks
- Access the Endpoints UI
- Deploy a New Model
- Monitor and Test

Create a Volume
- Access the Dashboard: Click Volumes in the Kubeflow central dashboard sidebar.
- New Volume: Click New Volume.
- Configure:
- Name: Enter a unique name for the volume.
- Storage Class: Select the Storage Class (e.g., topolvm, nfs) if multiple are available.
- Size: Specify the size of the volume in Gi (e.g., 10).
- Access Mode:
- ReadWriteOnce (RWO): Mounted by a single node (common for block storage).
- ReadWriteMany (RWX): Mounted by many nodes (common for NFS/file storage).
- Create: Click Create. The volume status will change to Bound once provisioned.
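The UI steps above amount to creating a Kubernetes PVC. A minimal equivalent manifest might look like this (a sketch; the name, namespace, storage class, and size are placeholders for your own values):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume            # placeholder: the unique name you entered
  namespace: my-namespace    # placeholder: your Kubeflow profile namespace
spec:
  accessModes:
    - ReadWriteOnce          # or ReadWriteMany for NFS/file storage
  resources:
    requests:
      storage: 10Gi          # the size in Gi
  storageClassName: topolvm  # optional; omit to use the cluster default
```

Once the underlying storage is provisioned, the PVC (and the volume in the dashboard) reports status Bound.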
Manage Volumes
- Open PVC Viewer: Click the "Folder" icon next to a volume to create a temporary Pod that mounts the volume and opens a file browser, letting you view, upload, and download files directly on the volume. Click "Close" to delete the temporary Pod when done.
- Delete: Click the delete icon (trash can) next to a volume to remove it. Note: This permanently deletes the data.
- Filter: Filter volumes by name, status, or storage class using the search bar.
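If you prefer the command line, the same volumes can be listed and deleted with kubectl (a sketch; `my-namespace` and `my-volume` are placeholders for your profile namespace and volume name):

```shell
# List the PVCs backing the volumes shown in the dashboard
kubectl get pvc -n my-namespace

# Delete a volume; like the dashboard's trash-can icon, this permanently deletes the data
kubectl delete pvc my-volume -n my-namespace
```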
Use a Volume in Notebooks
To use a volume in a Notebook Server:
- When creating a New Notebook, create a standard Workspace Volume (mounted at /home/jovyan), or attach existing volumes:
- Scroll to Data Volumes to attach additional existing volumes.
- Click Attach Existing Volume and select your volume.
- Specify the Mount Path (e.g., /home/jovyan/data).
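Behind the UI, an attached volume shows up in the Notebook resource as an ordinary Kubernetes volume and mount. A rough sketch (names, image, and paths are placeholders, not values from your cluster):

```yaml
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: my-notebook          # placeholder
  namespace: my-namespace    # placeholder
spec:
  template:
    spec:
      containers:
        - name: my-notebook
          image: kubeflownotebookswg/jupyter-scipy:latest  # placeholder image
          volumeMounts:
            - name: data
              mountPath: /home/jovyan/data   # the Mount Path you specified
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-volume             # the existing volume you attached
```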
Use Kubeflow KServe Endpoints
The KServe Endpoints UI allows you to deploy, manage, and monitor inference services for your machine learning models directly from the Kubeflow dashboard.
Access the Endpoints UI
- Click KServe Endpoints in the central dashboard sidebar.
- Select your namespace at the top of the page.
- You will see a list of deployed InferenceServices with their status and URLs.
Deploy a New Model
- New Endpoint: Click New Endpoint.
- InferenceService YAML: Provide the YAML definition for your InferenceService. You can use the sample YAML below as a template.
- Deploy: Click Create.
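As an illustration, a minimal InferenceService for KServe's public scikit-learn iris sample model might look like this (adjust the name and storageUri for your own model):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris         # placeholder: your model name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn        # the model format your runtime expects
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model  # KServe sample model
```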
Monitor and Test
After deployment, wait for the status to become Ready.
- Inspect: Click on the model name to see the YAML definition, details, and logs.
- Get URL: Copy the provided endpoint URL (e.g., http://model-name.namespace.svc.cluster.local/v1/models/model-name:predict, or the external URL).
- Test: Use curl or a Python client to send a prediction request.
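For example, a curl request against a scikit-learn-style model using the KServe v1 protocol might look like this (the URL, model name, and payload are placeholders; substitute the endpoint URL you copied and input that matches your model):

```shell
# Placeholder endpoint; replace with the URL copied from the Endpoints UI
MODEL_URL="http://sklearn-iris.my-namespace.svc.cluster.local/v1/models/sklearn-iris:predict"

# Send a prediction request (example iris feature vector)
curl -s -X POST "${MODEL_URL}" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4]]}'
```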