Client for interacting with the Databricks Clusters/Compute API.
#include <compute.h>
The Clusters API allows you to create, manage, and control compute clusters in your Databricks workspace. This implementation uses Clusters API 2.0.
Example usage:
auto auth = databricks::AuthConfig::from_environment();
databricks::Compute compute(auth);
auto cluster_list = compute.list_compute();
auto cluster = compute.get_compute("1234-567890-abcde123");
compute.start_compute("1234-567890-abcde123");
See also AuthConfig, the core authentication configuration shared across all Databricks features; its static member AuthConfig::from_environment(const std::string &profile = "DEFAULT") loads authentication configuration from all available sources.
Definition at line 37 of file compute.h.
◆ Compute() [1/3]
explicit databricks::Compute::Compute(const AuthConfig &auth)

Construct a Compute API client.

- Parameters
  - auth: Authentication configuration with host and token
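A minimal construction sketch: it assumes AuthConfig has public host and token members matching the description above; the member names and the URL/token values are placeholders, and AuthConfig::from_environment() is the documented alternative.

databricks::AuthConfig auth;                               // assumed to be default-constructible
auth.host  = "https://my-workspace.cloud.databricks.com";  // workspace URL (assumed member name)
auth.token = "dapiXXXXXXXXXXXXXXXX";                       // personal access token (assumed member name)
databricks::Compute compute(auth);                         // uses the explicit constructor documented above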
◆ Compute() [2/3]
explicit databricks::Compute::Compute(std::shared_ptr< internal::IHttpClient > http_client)

Construct a Compute API client with dependency injection (for testing).

- Parameters
  - http_client: Injected HTTP client (use MockHttpClient for unit tests)
- Note
  - This constructor is primarily for testing with mock HTTP clients.
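A test-setup sketch, assuming MockHttpClient lives alongside internal::IHttpClient, implements that interface, and is default-constructible; its exact namespace and configuration API are not shown on this page.

auto mock = std::make_shared<databricks::internal::MockHttpClient>();  // assumed namespace and constructor
// ... configure the mock's expected requests and responses here (mock API not documented on this page) ...
databricks::Compute compute(mock);  // injects the mock via the testing constructor documented above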
◆ ~Compute()
databricks::Compute::~Compute()
◆ Compute() [3/3]
databricks::Compute::Compute(const Compute &) = delete
◆ create_compute()
bool databricks::Compute::create_compute(const Cluster &cluster_config)

Create a new Spark cluster.

- Parameters
  - cluster_config: Cluster configuration describing the cluster to create
- Returns
  - true if the operation was successful
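A creation sketch; the Cluster member names below mirror common Clusters API 2.0 request fields (cluster_name, spark_version, node_type_id, num_workers) but are assumptions about this struct's actual layout.

databricks::Cluster cfg;                 // assumed to be default-constructible
cfg.cluster_name  = "analytics-dev";     // assumed member name
cfg.spark_version = "13.3.x-scala2.12";  // assumed member name
cfg.node_type_id  = "i3.xlarge";         // assumed member name
cfg.num_workers   = 2;                   // assumed member name
if (compute.create_compute(cfg)) {
    // creation request was accepted
}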
◆ get_compute()
Cluster databricks::Compute::get_compute(const std::string &cluster_id)

Get detailed information about a specific compute cluster.

- Parameters
  - cluster_id: The unique identifier of the cluster
- Returns
  - Cluster object with full details
- Exceptions
  - std::runtime_error: if the cluster is not found or the API request fails
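A lookup sketch with the documented exception handled; the cluster id is a placeholder, and compute is the client instance constructed in the class-level example.

try {
    databricks::Cluster cluster = compute.get_compute("1234-567890-abcde123");
    // inspect the returned Cluster fields here
} catch (const std::runtime_error &e) {
    // thrown when the cluster is not found or the API request fails
}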
◆ list_compute()
std::vector< Cluster > databricks::Compute::list_compute()

List all compute clusters in the workspace.

- Returns
  - Vector of Cluster objects
- Exceptions
  - std::runtime_error: if the API request fails
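An iteration sketch over the returned vector; the Cluster fields are not documented on this page, so the loop body is left as a comment.

auto clusters = compute.list_compute();  // may throw std::runtime_error
for (const auto &cluster : clusters) {
    // inspect each Cluster here (member names are not documented on this page)
    (void)cluster;
}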
◆ operator=()
◆ restart_compute()
bool databricks::Compute::restart_compute(const std::string &cluster_id)

Restart a compute cluster.

- Parameters
  - cluster_id: The unique identifier of the cluster to restart
- Returns
  - true if the operation was successful
- Exceptions
  - std::runtime_error: if the API request fails
- Note
  - This will terminate and then start the cluster with the same configuration.
◆ start_compute()
bool databricks::Compute::start_compute(const std::string &cluster_id)

Start a terminated compute cluster.

- Parameters
  - cluster_id: The unique identifier of the cluster to start
- Returns
  - true if the operation was successful
- Exceptions
  - std::runtime_error: if the API request fails
◆ terminate_compute()
bool databricks::Compute::terminate_compute(const std::string &cluster_id)

Terminate a running compute cluster.

- Parameters
  - cluster_id: The unique identifier of the cluster to terminate
- Returns
  - true if the operation was successful
- Exceptions
  - std::runtime_error: if the API request fails
- Note
  - This stops the cluster but does not permanently delete it. Terminated clusters can be restarted.
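A lifecycle sketch combining the start_compute, restart_compute, and terminate_compute calls documented above; the cluster id is a placeholder, and each call may throw std::runtime_error as noted.

const std::string id = "1234-567890-abcde123";
try {
    compute.start_compute(id);      // bring a terminated cluster back up
    compute.restart_compute(id);    // terminate, then start with the same configuration
    compute.terminate_compute(id);  // stop the cluster; it can be restarted later
} catch (const std::runtime_error &e) {
    // any of the calls above throws if the API request fails
}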
The documentation for this class was generated from the following file: compute.h