How to Install BigQuery MCP Server

Community Article Published April 22, 2025

In the rapidly evolving landscape of artificial intelligence and large language models (LLMs), the ability for models to access and reason over specific, up-to-date, and relevant data is paramount. While models are trained on vast datasets, they often lack real-time access to proprietary databases, organizational knowledge, or specific structured information required for accurate and context-aware responses.

This is where the Model Context Protocol (MCP) comes into play. MCP is designed as a standardized way for AI models to request and receive contextual information from various data sources. It acts as an intermediary layer, defining a common language for models to ask questions like "What are the sales figures for Q3?" or "Retrieve customer details for user ID 12345" and for data providers to respond with structured, relevant information.

One of the most powerful and widely used data warehouses is Google BigQuery. It allows organizations to store and analyze massive datasets efficiently. Imagine empowering your AI models to directly query and leverage the insights stored within your BigQuery tables. This is precisely what the BigQuery MCP Server enables. It acts as a specific implementation of an MCP server that connects directly to Google BigQuery, allowing authorized models (or applications acting on their behalf) to fetch context dynamically from your data warehouse.


By deploying this server, you create a secure and standardized bridge between your AI applications and your BigQuery data, enabling functionalities like:

  1. Real-time Data Access: Models can query current data directly from BigQuery tables.
  2. Contextual Enrichment: Enhance model prompts or responses with specific data points from your warehouse.
  3. Data-Grounded Reasoning: Allow models to perform analysis or answer questions based on actual data rather than just their training knowledge.
  4. Standardized Integration: Utilize the MCP standard for consistent interaction across different models and context sources.

This tutorial will guide you step-by-step through the process of installing and configuring the BigQuery MCP Server developed by Ergüt. We will cover the prerequisites, environment setup, installation process, configuration, and basic steps to run and test the server. This guide assumes you have some familiarity with command-line interfaces, Google Cloud Platform (GCP), and the Go programming language environment.

You can find the source code and original documentation for the project here: https://github.com/ergut/mcp-bigquery-server

Let's begin setting up your BigQuery context provider!


Prerequisites

Before diving into the installation, ensure you have the following components set up and ready:

  1. Google Cloud Platform (GCP) Account: You need an active GCP account with billing enabled. The server interacts directly with BigQuery, a GCP service.
  2. GCP Project: A designated GCP project where your BigQuery datasets reside or where you intend to run queries. Note down your Project ID.
  3. BigQuery API Enabled: Ensure the BigQuery API is enabled within your chosen GCP project. You can usually enable APIs via the GCP Console under "APIs & Services" > "Library".
  4. Service Account with Permissions: This is crucial for authenticating the MCP server with Google Cloud securely. You'll need to:
    • Create a Service Account in your GCP project.
    • Grant it appropriate IAM (Identity and Access Management) roles to interact with BigQuery. Essential roles typically include:
      • roles/bigquery.dataViewer (to read data)
      • roles/bigquery.jobUser (to run jobs/queries)
      • Consider roles/bigquery.dataEditor or roles/bigquery.user depending on the specific query needs, but always adhere to the principle of least privilege.
    • Generate and download a Service Account Key in JSON format. This file contains the credentials the server will use. Keep this file secure!
  5. Go (Golang) Environment: The server is written in Go. You need Go installed on the machine where you plan to build and run the server. Version 1.18 or later is generally recommended (refer to the project's go.mod file if specific version requirements exist). You can download and install Go from the official website (golang.org). Verify your installation by running go version in your terminal.
  6. Git: You need Git installed to clone the project repository from GitHub. Most operating systems have Git pre-installed or easily installable. Verify with git --version.
  7. gcloud CLI (Optional but Recommended): The Google Cloud SDK, specifically the gcloud command-line tool, is helpful for managing GCP resources and potentially for authentication, although the server primarily uses the service account key file.
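The tool checks above can be bundled into one quick script (a convenience sketch, not part of the project):

```shell
#!/usr/bin/env bash
# Prerequisite check: report which required tools are on the PATH.
# gcloud is optional, per the prerequisites list.
for tool in go git gcloud; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($("$tool" version 2>/dev/null | head -n 1))"
  else
    echo "$tool: MISSING"
  fi
done
```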

Step 1: Setting Up the Google Cloud Environment

This step focuses on preparing your GCP project and credentials.

  1. Select or Create a GCP Project: Log in to the Google Cloud Console. Select an existing project from the dropdown menu at the top or create a new one. Note your Project ID.
  2. Enable BigQuery API:
    • Navigate to "APIs & Services" > "Library".
    • Search for "BigQuery API".
    • Select it and click "Enable". If it's already enabled, you're good to go.
  3. Create a Service Account:
    • Navigate to "IAM & Admin" > "Service Accounts".
    • Click "+ CREATE SERVICE ACCOUNT".
    • Give it a descriptive name (e.g., mcp-bigquery-server-sa) and an optional description.
    • Click "CREATE AND CONTINUE".
  4. Grant IAM Roles:
    • In the "Grant this service account access to project" section, click "+ ADD ROLE".
    • Search for and select "BigQuery Data Viewer".
    • Click "+ ADD ANOTHER ROLE".
    • Search for and select "BigQuery Job User".
    • Note: Add other roles only if necessary for your specific queries (e.g., if models need to modify data, which is less common for context retrieval). Always grant the minimum required permissions.
    • Click "CONTINUE".
    • Skip the "Grant users access to this service account" step (unless needed for other reasons) and click "DONE".
  5. Generate Service Account Key:
    • Find the service account you just created in the list.
    • Click on the email address of the service account.
    • Go to the "KEYS" tab.
    • Click "ADD KEY" > "Create new key".
    • Select "JSON" as the key type.
    • Click "CREATE".
    • A JSON key file will be automatically downloaded to your computer. Treat this file like a password! Store it securely and note its location (you'll need the path later). Let's assume you save it as ~/keys/gcp-mcp-service-account.json.
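If you prefer the command line over the console, the same Step 1 setup can be scripted with the gcloud CLI. This is a sketch: the project ID, service account name, and key path are example values you should replace with your own.

```shell
#!/usr/bin/env bash
# Replace with your actual Project ID.
PROJECT_ID="your-gcp-project-id"
SA_NAME="mcp-bigquery-server-sa"

# Enable the BigQuery API in the project.
gcloud services enable bigquery.googleapis.com --project="$PROJECT_ID"

# Create the service account.
gcloud iam service-accounts create "$SA_NAME" \
  --project="$PROJECT_ID" \
  --display-name="MCP BigQuery Server"

# Grant the two baseline roles (least privilege).
for role in roles/bigquery.dataViewer roles/bigquery.jobUser; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="$role"
done

# Generate and download a JSON key to the path used in this guide.
mkdir -p ~/keys
gcloud iam service-accounts keys create ~/keys/gcp-mcp-service-account.json \
  --iam-account="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```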

Step 2: Installing Local Dependencies

Ensure Go and Git are installed on the machine where you will run the server.

  1. Install Go: If not already installed, visit golang.org/dl and follow the instructions for your operating system. Verify the installation:
    go version
    
  2. Install Git: If not already installed, visit git-scm.com/downloads. Verify the installation:
    git --version
    

Step 3: Cloning the Repository

Now, retrieve the server's source code from GitHub.

  1. Open your terminal or command prompt.
  2. Navigate to the directory where you want to store the project (e.g., ~/projects).
  3. Clone the repository using Git:
    git clone https://github.com/ergut/mcp-bigquery-server.git
    
  4. Change into the newly created project directory:
    cd mcp-bigquery-server
    

Step 4: Configuration

The server uses a config.yaml file to manage its settings, including GCP credentials and server parameters.

  1. Locate the Sample Configuration: Inside the mcp-bigquery-server directory, look for a sample or template configuration file, likely named config.yaml.example or similar. If it's an example file, copy it to config.yaml; if the repository ships a config.yaml directly, back it up before editing.

    # If config.yaml.example exists:
    cp config.yaml.example config.yaml
    # Or if config.yaml exists and you want to preserve the original:
    # cp config.yaml config.yaml.backup
    
  2. Edit config.yaml: Open the config.yaml file in a text editor. You'll need to update the following fields:

    • project_id: Replace the placeholder value with your actual GCP Project ID noted in Step 1.
    • service_account_key_path: Provide the full, absolute path to the JSON service account key file you downloaded in Step 1 (e.g., /home/user/keys/gcp-mcp-service-account.json or C:\Users\User\keys\gcp-mcp-service-account.json). Ensure the user running the server process has read permissions for this file.
    • port: Specify the network port on which the MCP server should listen for incoming requests. The default is often 8080 or 9090. Ensure this port is not already in use on your system.
    • allowed_origins (If present): Configure CORS (Cross-Origin Resource Sharing) settings if requests will come from web browsers on different domains. Often defaults to ["*"] for development, but restrict this in production.
    • dataset_allowlist / table_allowlist (If present): These are important security features. They might allow you to restrict which BigQuery datasets or specific tables the MCP server is allowed to query, preventing accidental or malicious access to sensitive data. Configure these based on your needs if the options are available in the config file structure.
    • Other parameters (e.g., timeout, max_rows): Adjust any other performance or security-related parameters as needed, based on the documentation within the config file or the project's README.
  3. Example config.yaml:

    # GCP Configuration
    project_id: "your-gcp-project-id" # Replace with your Project ID
    service_account_key_path: "/path/to/your/gcp-mcp-service-account.json" # Replace with the full path to your key file
    
    # Server Configuration
    port: 9090 # Port the server will listen on
    # allowed_origins: # Optional: Configure CORS if needed
    #  - "http://localhost:3000"
    #  - "https://your-frontend-app.com"
    
    # Query Configuration (Example - check actual config structure)
    # dataset_allowlist: # Optional: Restrict accessible datasets
    #  - "my_allowed_dataset"
    # table_allowlist: # Optional: Restrict accessible tables (format might vary)
    #  - "my_allowed_dataset.my_allowed_table"
    
    # timeout_seconds: 30 # Optional: Max query duration
    # max_rows_return: 1000 # Optional: Limit rows returned
    
  4. Save the config.yaml file.
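Before starting the server, you can independently confirm that the key and project are valid using the Google Cloud SDK's bq tool. This is an optional sanity check under the assumption that the SDK is installed; the paths match the examples above. Note that activate-service-account switches your active gcloud credentials.

```shell
# Authenticate as the service account using the downloaded key.
gcloud auth activate-service-account \
  --key-file="$HOME/keys/gcp-mcp-service-account.json"

# Run a trivial query; success confirms the key, project, and API access.
bq --project_id="your-gcp-project-id" query --nouse_legacy_sql 'SELECT 1 AS ok'
```

If this query fails, fix the credentials or IAM roles before debugging the server itself.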


Step 5: Building the Server

With the code downloaded and configured, you can now compile the Go application into an executable binary.

  1. Ensure you are still in the mcp-bigquery-server directory in your terminal.
  2. Run the Go build command:
    go build .
    
    This command compiles the Go source files (.go) in the current directory. It automatically downloads any necessary dependencies listed in the go.mod file.
  3. If the build is successful, you will find a new executable file in the directory. On Linux/macOS, it will likely be named mcp-bigquery-server. On Windows, it will be mcp-bigquery-server.exe.

Step 6: Running the Server

Now you can start the compiled MCP server.

  1. From the mcp-bigquery-server directory in your terminal, execute the binary:
    • On Linux/macOS:
      ./mcp-bigquery-server
      
    • On Windows:
      .\mcp-bigquery-server.exe
      
  2. Check the Output: The server should start and print log messages to the console. Look for messages indicating:
    • Loading configuration from config.yaml.
    • Successful authentication with Google Cloud (usually happens on the first query, but initialization might log related info).
    • The server is listening on the configured port (e.g., Server listening on :9090).
  3. Keep it Running: The server will run in the foreground, occupying your terminal session. To stop it, press Ctrl + C. For long-term running (especially in production), you should use a process manager like systemd (Linux), launchd (macOS), or run it within a container (Docker). You can also run it in the background using nohup ./mcp-bigquery-server & on Linux/macOS, but this is less robust than using a proper process manager.
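For a systemd-managed deployment, a minimal unit file might look like the sketch below. The paths and user are examples; note that WorkingDirectory matters because the server looks for config.yaml in the directory it runs from.

```ini
# /etc/systemd/system/mcp-bigquery-server.service
[Unit]
Description=BigQuery MCP Server
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=mcpserver
WorkingDirectory=/opt/mcp-bigquery-server
ExecStart=/opt/mcp-bigquery-server/mcp-bigquery-server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After installing the unit file, run `sudo systemctl daemon-reload` and `sudo systemctl enable --now mcp-bigquery-server`, then check status with `systemctl status mcp-bigquery-server`.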

Step 7: Testing the Server (Basic)

To verify the server is operational, you can send a basic MCP request. The exact structure of an MCP request can vary, but it typically involves a POST request to a specific endpoint (often /context) with a JSON payload specifying the query.

Refer to the ergut/mcp-bigquery-server README or the MCP specification for the exact request format expected by this server.

Assuming a common pattern, you might test using a tool like curl:

  1. Open a new terminal window (leave the server running in the first one).

  2. Construct a curl command. You'll likely need to send a POST request with a JSON body containing the BigQuery SQL query.

    Example (Hypothetical - Adapt based on actual server implementation): Let's say you want to query SELECT name, number FROM my_dataset.my_table LIMIT 10. The MCP request might look something like this:

    curl -X POST http://localhost:9090/context \
         -H "Content-Type: application/json" \
         -d '{
               "requester": { "client_id": "test-client" },
               "requested_context": {
                 "type": "bigquery/sql",
                 "query": "SELECT name, number FROM `your-gcp-project-id.my_dataset.my_table` LIMIT 10"
               }
             }'
    

    Important:

    • Replace http://localhost:9090/context with the correct URL and port if different.
    • Replace the query value with a valid BigQuery SQL query targeting a table accessible by your service account. Use the full project.dataset.table format, ensuring it's enclosed in backticks (`) if needed.
    • The JSON structure (requester, requested_context, type, query) is hypothetical. Check the project's documentation for the correct format.
  3. Examine the Response: If successful, the server should return a JSON response containing the data retrieved from BigQuery, likely structured according to the MCP specification (e.g., within a context_data field).

  4. Check Server Logs: Look at the terminal where the server is running. You should see log entries indicating an incoming request, the query being executed against BigQuery, and the response being sent. Errors here (e.g., authentication failures, SQL syntax errors, table not found) are crucial for debugging.


Troubleshooting Common Issues

  • Permission Denied / Authentication Errors:
    • Verify the service_account_key_path in config.yaml is correct and the file is readable.
    • Ensure the service account has the necessary IAM roles (BigQuery Data Viewer, BigQuery Job User) in the correct GCP project.
    • Check if the BigQuery API is enabled.
  • Connection Refused / Server Not Reachable:
    • Confirm the server is running and listening on the expected port (check logs).
    • Ensure the port specified in config.yaml and used in your test request matches.
    • Check for firewalls (local or network) that might be blocking access to the port.
  • config.yaml Not Found:
    • Make sure the configuration file is named exactly config.yaml and is located in the same directory from which you are running the executable.
  • Go Build Errors:
    • Ensure your Go environment is set up correctly (GOPATH, GOROOT, Go version).
    • Check internet connectivity, as go build needs to download dependencies.
    • Look for specific error messages – they often indicate syntax errors or missing packages (though Go modules usually handle this).
  • BigQuery Errors (e.g., Table Not Found, Invalid Query):
    • Check the SQL syntax in your test request.
    • Verify the table name (project.dataset.table) is correct and exists.
    • Ensure the service account has permission to access that specific dataset/table.
    • Check if any dataset_allowlist or table_allowlist in config.yaml is preventing access.
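For the "connection refused" case in particular, a one-liner can tell you whether anything is listening on the configured port at all (9090 in this guide's examples; Linux/macOS, assuming lsof is installed):

```shell
# Prints the listening process if the server is up, otherwise a message.
lsof -iTCP:9090 -sTCP:LISTEN 2>/dev/null || echo "Nothing listening on port 9090"
```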

Security Considerations

  • Service Account Key: The JSON key file is highly sensitive. Protect it using file system permissions. Avoid committing it to version control (add it to .gitignore). Consider using secret management systems (like GCP Secret Manager, HashiCorp Vault) for production deployments instead of storing keys directly on disk.
  • Network Access: By default, the server might listen on 0.0.0.0 (all network interfaces). In production, bind it to a specific interface (localhost if only local access is needed) or use firewalls to restrict access to only trusted clients (e.g., your application servers hosting the LLMs).
  • Allowlisting: Utilize dataset/table allowlisting features if available in the configuration to limit the scope of data the server can access.
  • HTTPS: For production, run the server behind a reverse proxy (like Nginx or Caddy) configured with TLS/SSL certificates to encrypt traffic (HTTPS).
  • Authentication/Authorization: While the server authenticates to GCP, consider adding an authentication layer to the MCP server itself (e.g., API keys, JWT tokens) if it's exposed to untrusted networks, ensuring only authorized models or applications can request context.
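As a concrete example of the HTTPS point above, a minimal Nginx reverse-proxy sketch might look like this (hostname and certificate paths are placeholders; the upstream port matches the example config):

```nginx
server {
    listen 443 ssl;
    server_name mcp.example.com;

    ssl_certificate     /etc/letsencrypt/live/mcp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mcp.example.com/privkey.pem;

    location / {
        # Forward requests to the MCP server bound to localhost.
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With this in place, bind the MCP server itself to localhost so that only the proxy can reach it.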

Conclusion

You have now successfully installed, configured, and run the BigQuery MCP Server. This server acts as a vital component in enabling your AI models to interact directly and securely with your data stored in Google BigQuery, leveraging the standardized Model Context Protocol.

By following these steps, you've created a pathway for richer, data-grounded interactions with your AI systems. Remember to consult the specific project's documentation (ergut/mcp-bigquery-server) for detailed information on the exact MCP request/response formats it supports and any advanced configuration options. As you move towards production, pay close attention to security best practices and robust deployment strategies using process managers or containerization.
