# Configuring Environments

{% hint style="info" %}
To **create, edit, or delete** Environments, go to the **side menu → Organization → Settings**.
{% endhint %}

<figure><img src="https://1396010420-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FA0TCjQZ9I4RNH8rs2n5A%2Fuploads%2F94ISs5lLjPa5h5IGrTu6%2FEnviroments.png?alt=media&#x26;token=5c1cc49d-6c71-44a9-ae7e-57bd2ec7d915" alt=""><figcaption><p>Environments Menu</p></figcaption></figure>

Once an environment is created, you can configure the following settings:

1. **Name and Description:** Provide a clear and meaningful name and description to help differentiate between environments.
2. **Branches**
   * **Branch Source**: The Git branch to pull code from.
   * **Branch Destination**: The Git branch where changes will be committed.

**Examples**

* **Production Environment**
  * Branch Source: `main` or `master`
  * Branch Destination: `main` or `master` (same as source)
* **Development Environment**
  * Branch Source: `main` or `master`
  * Branch Destination: `dev`

3. **Connection Settings**\
   Set up the environment’s cloud connection (e.g., AWS or GCP) by selecting the appropriate configuration for that provider. The collapsible tables below list the available fields for each provider, each followed by a short connection sketch.

<details>

<summary>AWS Environment Configurations</summary>

<table><thead><tr><th width="138.43359375">Field</th><th width="220.27734375">Description</th><th>Example</th></tr></thead><tbody><tr><td>Database</td><td>Specify the database (Data catalog) to build models into</td><td><code>awsdatacatalog</code></td></tr><tr><td>Region</td><td>AWS region of your Athena instance</td><td><code>us-east-1</code>, <code>eu-west-1</code></td></tr><tr><td>S3 Staging Directory</td><td>S3 location to store Athena query results and metadata</td><td><code>s3://my_bucket/my_folder/...</code></td></tr><tr><td>Schema</td><td>Specify the schema (Athena database) to build models into (<strong>lowercase only</strong>)</td><td><code>production</code>, <code>development</code>, <code>test</code></td></tr><tr><td>Number of Boto3 Retries</td><td>Number of times to retry boto3 requests (e.g. deleting S3 files for materialized tables)</td><td><code>3</code></td></tr><tr><td>Number of Retries</td><td>Number of times to retry a failing query</td><td><code>3</code></td></tr><tr><td>S3 Data Directory</td><td>Prefix for storing tables, if different from the connection's <strong>S3 Staging Directory</strong></td><td><code>s3://my_bucket/my_folder/...</code></td></tr><tr><td>S3 Data Naming Convention</td><td>How to generate table paths in <code>s3_data_dir</code></td><td><code>schema_table: {s3_data_dir}/{schema3}...</code></td></tr><tr><td>S3 Temp Tables Prefix</td><td>Prefix for storing temporary tables, if different from the connection's <code>s3_data_dir</code></td><td></td></tr><tr><td>Spark Work Group</td><td>Identifier of Athena Spark workgroup for running Python models</td><td></td></tr><tr><td>Number of Threads</td><td>Number of threads to use</td><td><code>4</code></td></tr><tr><td>Work Group</td><td>Identifier of Athena workgroup</td><td></td></tr></tbody></table>

</details>
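Most of the AWS fields describe a standard Athena connection, so you can sanity-check the values before saving them in the environment. Below is a minimal sketch using the PyAthena Python library; the bucket, schema, and workgroup names are placeholders rather than values from this page, and AWS credentials are assumed to be available in your shell (for example via `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` or a named profile):

```python
from pyathena import connect

# Placeholder values mirroring the Region, S3 Staging Directory, Schema,
# and Work Group fields from the table above; replace with your own.
conn = connect(
    region_name="us-east-1",
    s3_staging_dir="s3://my_bucket/my_folder/athena-results/",
    schema_name="development",   # must be lowercase, like the Schema field
    work_group="primary",
)

cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchall())  # [(1,)] confirms Athena can run queries and write results to S3
```

Settings such as the retry counts, thread count, and the S3 data directory control how models are run and materialized rather than the raw connection, so they have no direct equivalent in this sketch.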

<details>

<summary>GCP Environment Configurations</summary>

<table><thead><tr><th width="172.0390625">Field</th><th width="289.00390625">Description</th><th>Example</th></tr></thead><tbody><tr><td>Project ID</td><td>The GCP project ID that contains your BigQuery datasets</td><td><code>my-project</code></td></tr><tr><td>Number of Threads</td><td>The number of threads to use for parallel execution</td><td><code>4</code></td></tr><tr><td>Dataset</td><td>The default BigQuery dataset to be used. The same as schema</td><td><code>my-dataset</code></td></tr><tr><td>Priority</td><td>The priority with which to execute BigQuery queries</td><td><code>batch</code></td></tr><tr><td>Job Execution Timeout (Seconds)</td><td>Maximum number of seconds to wait for a query to complete</td><td><code>300</code></td></tr><tr><td>Job Creation Timeout (Seconds)</td><td>Maximum number of seconds to wait when submitting a job</td><td><code>300</code></td></tr><tr><td>Number of Job Retries</td><td>The number of times to retry a failed job</td><td><code>3</code></td></tr><tr><td>Job Retries Deadline (Seconds)</td><td>Maximum time in seconds for a job and its retries before raising an error</td><td><code>300</code></td></tr><tr><td>Location</td><td>The geographical location of your BigQuery dataset</td><td><code>US</code></td></tr><tr><td>Maximum Bytes Billed</td><td>The max number of bytes that can be billed for a given BigQuery query. Queries will fail if they exceed this limit</td><td></td></tr><tr><td>Scopes</td><td>Scopes for authenticating the connection</td><td></td></tr><tr><td>Service Account to Impersonate</td><td>The Google service account to impersonate when making API requests</td><td></td></tr><tr><td>Dataproc Region</td><td>The Google Cloud region for PySpark workloads on Dataproc</td><td><code>us-east1</code>, <code>us-west1</code>, ...</td></tr><tr><td>GCS Bucket Name</td><td>The URI for a Google Cloud Storage bucket to host Python code executed via Dataproc</td><td></td></tr></tbody></table>

</details>
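The core GCP settings correspond to options on a regular BigQuery client and query job, so they can be verified outside the platform as well. As a rough illustration only, here is a minimal sketch using the google-cloud-bigquery Python library; the project, dataset, and byte limit are placeholders, and credentials are assumed to come from Application Default Credentials:

```python
from google.cloud import bigquery

# Project ID and Location from the table above (placeholder values).
client = bigquery.Client(project="my-project", location="US")

# Priority, Maximum Bytes Billed, and the default Dataset map onto the query job config.
job_config = bigquery.QueryJobConfig(
    priority=bigquery.QueryPriority.BATCH,
    maximum_bytes_billed=10_000_000_000,      # fail any query that would bill more than ~10 GB
    default_dataset=f"{client.project}.my_dataset",
)

# The Job Execution Timeout roughly corresponds to how long we wait for the result.
job = client.query("SELECT 1 AS ok", job_config=job_config)
print(list(job.result(timeout=300)))
```

Scopes, service-account impersonation, and the Dataproc fields only matter for more specialized setups (custom authentication and Python models) and are omitted from this sketch.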
