Merged
14 changes: 0 additions & 14 deletions .github/aw/actions-lock.json

This file was deleted.

1,280 changes: 0 additions & 1,280 deletions .github/workflows/dependabot-triage-agent.lock.yml

This file was deleted.

129 changes: 0 additions & 129 deletions .github/workflows/dependabot-triage-agent.md

This file was deleted.

@@ -0,0 +1,141 @@
---
title: Adding nodes to a high availability configuration
shortTitle: Adding nodes to HA
intro: 'Add nodes to the primary high availability (HA) datacenter. This is intended to offload CPU-intensive tasks from the primary node, allowing for horizontal scaling of the {% data variables.product.prodname_ghe_server %} instance.'
versions:
ghes: '>= 3.18'
type: how_to
topics:
- High availability
- Enterprise
- Infrastructure
allowTitleToDifferFromFilename: true
---

> [!NOTE]
> The ability to add additional compute nodes to HA is in {% data variables.release-phases.public_preview %} and subject to change. During the preview, please share any feedback with your customer success team.

For {% data variables.product.prodname_ghe_server %} customers looking to scale horizontally, migrating to and operating a cluster is an option, but is resource-intensive and time-consuming. As an alternative, we recommend adding nodes to an HA configuration.

The terms "additional node" and "stateless node" are used interchangeably throughout this article. Stateless nodes can only be added to HA deployments that contain at least one replica.

## Additional nodes

Of all the services running on a {% data variables.product.prodname_ghe_server %} appliance, Unicorn is often the most CPU- and memory-intensive, closely followed by Aqueduct, Git, and MySQL. Because Unicorn and Aqueduct are stateless services, they are well suited for horizontal scaling and can run on a separate set of nodes. The remaining services can continue operating with a single instance per datacenter.

Additional nodes allow you to scale web and job workloads horizontally. They can also offload Unicorn and Aqueduct from the primary node, freeing up substantial compute and memory resources for the remaining stateful services. If you are experiencing performance-related outages due to high CPU usage by Unicorn instances, adding additional nodes is recommended. There are no significant restrictions on the number of these nodes you can add within a datacenter.

## Criteria

If you are experiencing degraded performance due to an overloaded primary node in an HA configuration, you should consider adding additional nodes to your HA environment. By scaling web and job roles horizontally beyond the primary node, these extra nodes can help reduce the load on the primary host.

For example, if you notice backlogs in Unicorn or Aqueduct queues, or are experiencing other types of resource contention, you should consider this approach. Even if there isn't visible queuing, running out of CPU on the primary node is another clear signal. In these cases, you can add additional nodes and reduce the number of workers per node, so the primary node handles less of the overall workload.
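As a rough illustration, assuming SSH access to the primary appliance (a Linux system), you can compare the 1-minute load average to the core count with standard tools. This is a hypothetical sketch using generic utilities, not a GHES-specific check:

``` shell copy
# Sketch: compare the 1-minute load average to the number of CPU cores.
# A load that consistently exceeds the core count suggests the primary
# node is CPU-bound and may benefit from additional nodes.
cores=$(nproc)
load=$(awk '{print $1}' /proc/loadavg)
echo "1-minute load average: ${load} (cores: ${cores})"
```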

## Adding a node

Each node you add to an HA deployment is a virtual machine (VM) running the {% data variables.product.prodname_ghe_server %} software, and it must run the same software version as the primary. Generally, a stateless node does not need to match the primary's memory, CPU, or storage specifications. However, the stateless node requires sub-millisecond connectivity to the primary instance. Replica connectivity requirements remain unchanged.

To add nodes to the primary datacenter in an HA configuration, use the `ghe-add-node` command. This command sets up the current appliance as a node within the HA deployment and is intended to offload CPU-intensive tasks from the primary data node, enabling horizontal scaling. These nodes are designed to handle web and job workloads, allowing for more efficient workload distribution and management. The command takes the form:

``` shell copy
/usr/local/share/enterprise/ghe-add-node PRIMARY_IP [--hostname HOSTNAME]
```

- `PRIMARY_IP`: The IP address of the primary node.
- `HOSTNAME` (optional): Desired hostname for the added host.

For example, to add a node with hostname `ghes-node-1` to the HA primary instance with IP address `192.168.1.1` in the HA primary datacenter, you would run the following command:

``` shell copy
/usr/local/share/enterprise/ghe-add-node 192.168.1.1 --hostname ghes-node-1
```

Then, on the primary node, you must run the following commands:

``` shell copy
ghe-config-apply
ghe-cluster-balance rebalance --yes
```

Running `ghe-config-apply` on the primary is required to complete the addition of stateless nodes.

For the public preview, we have not specifically tested for downtime, and it's not clear if a maintenance window is required.

## Removing an additional node

To remove a node, run `ghe-remove-node` from the node you want to remove. Then, on the primary node, you must run:

``` shell copy
ghe-config-apply
```

Running `ghe-config-apply` on the primary is required to complete the removal of stateless nodes.

For the public preview, we have not specifically tested for downtime, and it's not clear if a maintenance window is required.

## Reprovisioning a node that previously hosted {% data variables.product.prodname_ghe_server %}

You can use a node that previously hosted and ran {% data variables.product.prodname_ghe_server %} as a stateless node. To do so, the node must be updated to version 3.18 or later, and all nodes in the deployment must be running the same version. On that node, check whether `/data/user/common/cluster.conf` already exists. If it does, you must perform cleanup before running the `ghe-add-node` command on the stateless node.

For example:

``` shell copy
sudo rm -f /etc/github/cluster /data/user/common/cluster.conf
sudo timeout -k4 10 systemctl stop wireguard 2>/dev/null || sudo ip link delete tun0 || true
```

## Limits and behavior

There is no theoretical limit to the number of nodes you can add. In practice, however, adding too many nodes can cause issues and impact stability or performance. At this time, newly added nodes process a predefined set of tasks; you cannot choose which types of tasks are offloaded. All API requests can be processed by additional nodes.

Requests that involve a Git operation are routed to the primary node: Git operations are not handled by additional nodes. For example, branch deletion is a Git operation, so it won't be handled by a stateless node.

Stateless nodes do not run Elasticsearch workloads, but they do run kafka-lite.

## System and networking requirements

Generally, stateless nodes don't need to match the memory, CPU, and storage specs of the primary node. System requirements should take into account the existing resource consumption of web and job services on the primary node, and whether the primary node will completely offload those workloads to the new node.

The stateless node and the primary instance require sub-millisecond connectivity. Generally, all nodes within the primary datacenter require sub-millisecond connectivity. Replica connectivity requirements remain unchanged.
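One way to sanity-check connectivity, assuming ICMP is permitted between the hosts, is to measure the average round-trip time with `ping`. The IP address below is a placeholder:

``` shell copy
# Sketch: measure the average round-trip time to the primary node.
# Averages should be sub-millisecond within the primary datacenter.
PRIMARY_IP=192.168.1.1   # placeholder; use your primary node's IP address
avg=$(ping -c 10 "$PRIMARY_IP" | awk -F'/' '/^(rtt|round-trip)/ {print $5}')
echo "average RTT to ${PRIMARY_IP}: ${avg} ms"
```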

## Traffic routing and request handling

The primary node routes traffic to the additional nodes. When there are multiple stateless nodes, the primary sends each new connection to the node with the fewest active connections at that moment.

## Upgrading an HA deployment with additional nodes

The following is an example upgrade sequence:

* Start maintenance window.
* Stop replicas.
* Upgrade stateless nodes in parallel.
* Upgrade the primary node.
* Upgrade the replicas. They can be upgraded in parallel or sequentially depending on your disaster recovery preferences.
* Start replicas.
* Remove maintenance window.

The additional nodes should not cause additional downtime during upgrades.
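The sequence above can be sketched as a dry run. The `ghe-maintenance`, `ghe-repl-stop`, `ghe-upgrade`, and `ghe-repl-start` commands are standard GHES administrative utilities, but the package name and the exact ordering for your environment are assumptions; the `run` helper only prints each step instead of executing it:

``` shell copy
# Dry-run sketch of the upgrade sequence; each step is printed, not executed.
run() { echo "+ $*"; }

run "ghe-maintenance -s"               # on the primary: start the maintenance window
run "ghe-repl-stop"                    # on each replica: stop replication
run "ghe-upgrade UPGRADE_PACKAGE.pkg"  # on each stateless node, in parallel
run "ghe-upgrade UPGRADE_PACKAGE.pkg"  # on the primary node
run "ghe-upgrade UPGRADE_PACKAGE.pkg"  # on each replica, in parallel or sequentially
run "ghe-repl-start"                   # on each replica: resume replication
run "ghe-maintenance -u"               # on the primary: end the maintenance window
```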

## Failover and disaster recovery behavior

There is no need to "tear down" additional nodes, as they do not contain any data.

During failover, the replica node is removed from the original deployment and converted to a standalone node. Stateless nodes should be re-attached to the promoted replica, similar to how additional replicas are re-attached after a failover.

If the primary node is functional and you want to promote a replica to primary, remove the stateless nodes from the primary with the `ghe-remove-node` command before re-adding them to the promoted node.

If the primary node is unreachable and unrecoverable, stateless nodes can be re-added without removing them from the original primary.
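For illustration, re-attaching a stateless node after promoting a replica might look like the following dry-run sketch. The IP address and hostname are placeholders, and the `run` helper prints each step instead of executing it:

``` shell copy
# Dry-run sketch: re-attach a stateless node to a promoted replica.
run() { echo "+ $*"; }

# Only if the old primary is still functional: detach from it first.
run "ghe-remove-node"   # on the stateless node

# Re-attach to the promoted replica (placeholder IP and hostname).
run "/usr/local/share/enterprise/ghe-add-node 192.168.1.2 --hostname ghes-node-1"
run "ghe-config-apply"  # on the promoted primary
```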

## Monitoring, logs, and support bundles

On the primary node, the Management Console monitoring dashboards display metrics for all nodes, including the stateless nodes. Commands such as `ghe-cluster-nodes` and `ghe-cluster-status` contain details on stateless nodes. All Management Console requests are served by the primary node.

Logs are stored locally on the stateless nodes. They can be exported from these nodes to third-party log management services.

You can use the `ghe-cluster-support-bundle` and `ghe-support-bundle` commands to generate and upload cluster or single-node bundles.
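As an illustration, assuming these commands support writing the bundle to standard output with `-o` as in their documented usage, pulling bundles over the administrative SSH port might look like this dry-run sketch (hostnames are placeholders; the `run` helper prints instead of executing):

``` shell copy
# Dry-run sketch: generate support bundles over the administrative SSH port.
run() { echo "+ $*"; }

run "ssh -p 122 admin@ghes-node-1 -- 'ghe-support-bundle -o' > support-bundle.tgz"
run "ssh -p 122 admin@primary-host -- 'ghe-cluster-support-bundle -o' > cluster-support-bundle.tgz"
```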

## Known limitations

This feature is not designed for monorepos, but adding stateless nodes may indirectly improve monorepo operations by reducing web and job workloads on the primary node. There are no autoscaling or scale-down features.
@@ -0,0 +1,10 @@
---
title: Additional nodes
intro: 'You can configure additional nodes to offload stateless workloads from the primary node in your {% data variables.product.prodname_ghe_server %} instance.'
versions:
ghes: '>= 3.18'
topics:
- Enterprise
children:
- /configuring-additional-nodes
---
@@ -14,5 +14,6 @@ children:
- /configuring-high-availability
- /caching-repositories
- /multiple-data-disks
- /additional-nodes
shortTitle: 'Monitor and manage your instance'
---
@@ -14,7 +14,7 @@ redirect_from:
- /copilot/how-tos/agents/copilot-coding-agent/extend-coding-agent-with-mcp
- /copilot/how-tos/agents/coding-agent/extend-coding-agent-with-mcp
contentType: how-tos
category:
category:
- Integrate Copilot with your tools
---

@@ -96,6 +96,15 @@ Note that all `string` and `string[]` fields besides `tools` & `type` support su

## Example configurations

The examples below show MCP server configurations for different providers.

* [Sentry](#example-sentry)
* [Notion](#example-notion)
* [Azure](#example-azure)
* [Cloudflare](#example-cloudflare)
* [Azure DevOps](#example-azure-devops)
* [Atlassian](#example-atlassian)

### Example: Sentry

The [Sentry MCP server](https://github.com/getsentry/sentry-mcp) gives {% data variables.product.prodname_copilot_short %} authenticated access to exceptions recorded in [Sentry](https://sentry.io).
@@ -250,6 +259,39 @@ To use the Azure DevOps MCP server with {% data variables.copilot.copilot_coding
}
```

### Example: Atlassian

The [Atlassian MCP server](https://github.com/atlassian/atlassian-mcp-server) gives {% data variables.product.prodname_copilot_short %} authenticated access to your Atlassian apps, including Jira, Compass, and Confluence.

For more information about authenticating to the Atlassian MCP server using an API key, see [Configuring authentication via API token](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/configuring-authentication-via-api-token/) in the Atlassian documentation.

```javascript copy
// If you copy and paste this example, you will need to remove the comments prefixed with `//`, which are not valid JSON.
{
"mcpServers": {
"atlassian-rovo-mcp": {
"command": "npx",
"type": "local",
"tools": ["*"],
"args": [
"mcp-remote@latest",
"https://mcp.atlassian.com/v1/mcp",
// We can use the $ATLASSIAN_API_KEY environment variable which is passed
// to the server because of the `env` value below.
"--header",
"Authorization: Basic $ATLASSIAN_API_KEY"
],
"env": {
// The value of the `COPILOT_MCP_ATLASSIAN_API_KEY` secret will be passed
// to the server command as an environment variable
// called `ATLASSIAN_API_KEY`.
"ATLASSIAN_API_KEY": "$COPILOT_MCP_ATLASSIAN_API_KEY"
}
}
}
}
```

## Reusing your MCP configuration from {% data variables.product.prodname_vscode %}

If you have already configured MCP servers in {% data variables.product.prodname_vscode_shortname %}, you can leverage a similar configuration for {% data variables.copilot.copilot_coding_agent %}.
3 changes: 2 additions & 1 deletion content/rest/issues/index.md
@@ -16,8 +16,9 @@ children:
- /assignees
- /comments
- /events
- /issues
- /issue-dependencies
- /issue-field-values
- /issues
- /labels
- /milestones
- /sub-issues
15 changes: 15 additions & 0 deletions content/rest/issues/issue-field-values.md
@@ -0,0 +1,15 @@
---
title: REST API endpoints for issue field values
shortTitle: Issue field values
intro: Use the REST API to view and manage issue field values for issues.
versions: # DO NOT MANUALLY EDIT. CHANGES WILL BE OVERWRITTEN BY A 🤖
fpt: '*'
ghec: '*'
ghes: '*'
topics:
- API
autogenerated: rest
allowTitleToDifferFromFilename: true
---

<!-- Content after this section is automatically generated -->