
Cluster Deployment Architecture

The Elastic Stack is a unified observability platform that brings together search, data processing, and visualization through Elasticsearch, Logstash, and Kibana. It provides an end-to-end pipeline for ingesting, transforming, indexing, and exploring operational data at scale. Elasticsearch delivers distributed search and analytics with near real-time indexing, while Logstash enables flexible data collection and enrichment from diverse sources. Kibana offers an interactive interface for visualizing log streams, building dashboards, and performing advanced analysis.

This solution is well-suited for log aggregation, application monitoring, infrastructure observability, and security analytics. Its open architecture and extensive ecosystem make it adaptable to a wide range of use cases—including distributed system debugging, SIEM workflows, API performance monitoring, and centralized logging.

This Marketplace application stands up a multi-node Elastic Stack cluster using an automated deployment script configured by Akamai.

Deploying a Marketplace App

The Linode Marketplace lets you easily deploy software on a Compute Instance using Cloud Manager. See Get Started with Marketplace Apps for complete steps.

  1. Log in to Cloud Manager and select the Marketplace link from the left navigation menu. This displays the Linode Create page with the Marketplace tab pre-selected.

  2. Under the Select App section, select the app you would like to deploy.

  3. Complete the form by following the steps and advice within the Creating a Compute Instance guide. Depending on the Marketplace App you selected, there may be additional configuration options available. See the Configuration Options section below for compatible distributions, recommended plans, and any additional configuration options available for this Marketplace App.

  4. Click the Create Linode button. Once the Compute Instance has been provisioned and has fully powered on, wait for the software installation to complete. If the instance is powered off or restarted before this time, the software installation will likely fail.

To verify that the app has been fully installed, see Get Started with Marketplace Apps > Verify Installation. Once installed, follow the instructions within the Getting Started After Deployment section to access the application and start using it.

Estimated deployment time
A cluster of 5 nodes should be fully installed within 5-10 minutes. Larger clusters take longer to provision; as a rough rule, allow about 8 minutes for every 5 nodes to estimate completion time.
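The rough rule above can be computed directly; a quick sketch (8 minutes per block of 5 nodes, rounded up):

```shell
# Rough deployment-time estimate: ~8 minutes per block of 5 nodes, rounded up.
nodes=12
minutes=$(( (nodes + 4) / 5 * 8 ))
echo "Estimated deployment time for ${nodes} nodes: ~${minutes} minutes"
```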

Configuration Options

Elastic Stack Options

  • Linode API Token (required): Your API token is used to deploy additional Compute Instances as part of this cluster. At a minimum, this token must have Read/Write access to Linodes. If you do not yet have an API token, see Get an API Access Token to create one.

  • Email address (for the Let’s Encrypt SSL certificate) (required): Your email address is used for Let’s Encrypt renewal notices. An SSL certificate is obtained and validated through certbot, then installed on the Kibana instance in the cluster. This allows you to visit Kibana securely through a browser.

Limited Sudo User

Fill out the following fields to automatically create a limited sudo user with a strong generated password for your new Compute Instance. This account is assigned to the sudo group, which provides elevated permissions when running commands with the sudo prefix.

  • Limited sudo user: Enter your preferred username for the limited user. The username cannot contain capital letters, spaces, or special characters.

    Locating The Generated Sudo Password

    A password is generated for the limited user and stored in a .credentials file in their home directory, along with any application-specific passwords. It can be viewed by running: cat /home/$USERNAME/.credentials

    For best results, add an account SSH key for the Cloud Manager user that is deploying the instance, and select that user as an authorized_user either through the API or in Cloud Manager. That user's SSH public key is then assigned to both the root and limited user accounts.

  • Disable root access over SSH: To block the root user from logging in over SSH, select Yes. You can still switch to the root user once logged in, and you can also log in as root through Lish.

    Accessing The Instance Without SSH
    If you disable root access for your deployment and do not provide a valid account SSH key assigned to the authorized_user, you need to log in as the root user via the Lish console and run cat /home/$USERNAME/.credentials to view the generated password for the limited user.
Warning
Do not use a double quotation mark character (") within any of the App-specific configuration fields, including user and database password fields. This special character may cause issues during deployment.
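If you don't yet have an account SSH key to select as an authorized_user, one way to create a keypair locally is sketched below (the file path and key comment are arbitrary examples):

```shell
# Generate an Ed25519 keypair; add the public key to your Cloud Manager user
# profile so it can be selected as an authorized_user at deploy time.
rm -f /tmp/deploy_key /tmp/deploy_key.pub
ssh-keygen -t ed25519 -f /tmp/deploy_key -N "" -C "marketplace-deploy"
cat /tmp/deploy_key.pub    # paste this value into Cloud Manager (SSH Keys)
```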

TLS/SSL Certificate Options

The following fields are used when creating the self-signed TLS/SSL certificates for the cluster.

  • Country or region (required): Enter the country or region for you or your organization.
  • State or province (required): Enter the state or province for you or your organization.
  • Locality (required): Enter the town or other locality for you or your organization.
  • Organization (required): Enter the name of your organization.
  • Email address (required): Enter the email address you wish to use for your certificate file.
  • CA Common name: This is the common name for the self-signed Certificate Authority.
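The deployment script generates the self-signed CA for you; purely for illustration, an equivalent certificate could be produced with openssl using the same subject fields the form collects (all values below are example placeholders):

```shell
# Illustrative only: a self-signed CA built from the same subject fields the
# form collects. The Marketplace deployment handles this automatically.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/example-ca.key -out /tmp/example-ca.crt \
    -subj "/C=US/ST=Pennsylvania/L=Philadelphia/O=ExampleOrg/emailAddress=admin@example.com/CN=Example CA"

# Confirm the subject fields landed in the certificate.
openssl x509 -in /tmp/example-ca.crt -noout -subject
```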

Picking the Correct Instance Plan and Size

In the Cluster Settings section you can designate the size for each component in your Elastic deployment. The size of the cluster depends on your needs. If you are looking for a faster deployment, stick with the provided defaults.

  • Kibana Size: This deployment creates a single Kibana instance with Let’s Encrypt certificates. This option cannot be changed.
  • Elasticsearch Cluster Size: The total number of nodes in your Elasticsearch cluster.
  • Logstash Cluster Size: The total number of nodes in your Logstash cluster.

Next, associate your Elasticsearch and Logstash clusters with a corresponding instance plan option.

  • Elasticsearch Instance Type: This is the plan type used for your Elasticsearch cluster.
  • Logstash Instance Type: This is the plan type used for your Logstash cluster.
Kibana instance type
To choose the Kibana instance plan, first select a deployment region, then pick a plan from the Linode Plan section.

Additional Configuration

  • Filebeat IP addresses allowed to access Logstash: If you already have existing Filebeat agents installed, you can provide their IP addresses for an allowlist. The IP addresses must be comma-separated.

  • Logstash username to be created for index: This is the username that is created with access to the index defined below, so that you can begin ingesting logs right after deployment.

  • Elasticsearch index to be created for log ingestion: This lets you start ingesting logs immediately. Edit the index name for your specific use case. For example, if you have a WordPress application you want to aggregate logs for, the index name wordpress-logs would be appropriate.
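Before pasting the allowlist into the Filebeat field, it can help to sanity-check the format; a small sketch (the addresses below are examples):

```shell
# Validate a comma-separated IPv4 allowlist for the Filebeat field.
allowlist="192.0.2.10,192.0.2.11,198.51.100.7"

ok=true
for ip in $(echo "$allowlist" | tr ',' ' '); do
    # Each entry must look like a dotted-quad IPv4 address.
    echo "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' \
        || { echo "invalid entry: $ip"; ok=false; }
done
[ "$ok" = true ] && echo "allowlist looks valid"
```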

Getting Started After Deployment

Accessing Elastic Frontend

Once your cluster has finished deploying, you can log in to your Elastic cluster from your local browser.

  1. Log into the provisioner node as your limited sudo user, replacing USER with the sudo username you created, and IP_ADDRESS with the instance’s IPv4 address:

    ssh USER@IP_ADDRESS
    The provisioner node is also the Kibana node
    Your provisioner node is the first Linode created in your cluster and is also the instance running Kibana. To identify the node in your list of Linodes, look for the node appended with the name “kibana”. For example: kibana-76f0443c
  2. Open the .credentials file with the following command. Replace USER with your sudo username:

    sudo cat /home/USER/.credentials
  3. In the .credentials file, locate the Kibana URL. Paste the URL into your browser of choice, and you should be greeted with a login page.

    Elastic Login Page

  4. To access the console, enter elastic as the username along with the password posted in the .credentials file. A successful login redirects you to the welcome page. From there you are able to add integrations, visualizations, and make other config changes.
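Steps 2-4 can also be scripted. The sketch below uses a mock credentials file, since the exact field labels in your .credentials file may differ from this example:

```shell
# Extract the Kibana URL from a credentials file. A mock file stands in for
# /home/USER/.credentials; the "Kibana URL:" label is an assumed format --
# check your own .credentials for the exact layout.
cat > /tmp/mock_credentials <<'EOF'
Kibana URL: https://203.0.113.10
elastic password: s3cr3t-example
EOF

kibana_url=$(grep -i 'kibana' /tmp/mock_credentials | awk '{print $NF}')
echo "Open in a browser: ${kibana_url}"
```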

Configure Filebeat (Optional)

Follow the next steps if you already have Filebeat configured on a system.

  1. Create a backup of your /etc/filebeat/filebeat.yml configuration:

    cp /etc/filebeat/filebeat.yml{,.bak}
  2. Update your Filebeat inputs:

    File: /etc/filebeat/filebeat.yml
    filebeat.inputs:
    
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input-specific configurations.
    
    # filestream is an input for collecting log messages from files.
    - type: filestream
    
      # Unique ID among all inputs, an ID is required.
      id: web-01
    
      # Change to true to enable this input configuration.
      #enabled: false
      enabled: true
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/log/apache2/access.log

    In this example, the id must be unique to the instance so you know the source of the log. Ideally this should be the hostname of the instance, and this example uses the value web-01. Update paths to the log that you want to send to Logstash.

  3. While in the /etc/filebeat/filebeat.yml, update the Filebeat output directive:

    File: /etc/filebeat/filebeat.yml
    output.logstash:
      # Logstash hosts
      hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"]
      loadbalance: true
    
      # List of root certificates for HTTPS server verifications
      ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"]

    The hosts parameter can contain the IP addresses of your Logstash hosts or their FQDNs. In this example, logstash-1.example.com and logstash-2.example.com are resolved through entries in the /etc/hosts file.

  4. Add a Certificate Authority (CA) certificate by adding the contents of ca.crt to your /etc/filebeat/certs/ca.pem file.

    To obtain your ca.crt, open a separate terminal session, and log into your Kibana node. Navigate to the /etc/kibana/certs/ca directory, and view the file contents with the cat command:

    cd /etc/kibana/certs/ca
    sudo cat ca.crt

    Copy the file contents, and add it to your ca.pem file on your Filebeat system.

  5. Once you’ve added the certificate to your ca.pem file, restart the Filebeat service and ensure it starts on boot:

    sudo systemctl restart filebeat
    sudo systemctl enable filebeat

Once complete, you should be able to start ingesting logs into your cluster using the index you created.
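Filebeat also ships built-in checks that can confirm the configuration and connectivity before you rely on ingestion; a sketch, assuming filebeat is installed on the shipping host:

```shell
# Sanity-check Filebeat: "test config" validates filebeat.yml, and
# "test output" attempts a connection to each host in output.logstash.
if command -v filebeat >/dev/null 2>&1; then
    filebeat test config
    filebeat test output
    status="checked"
else
    echo "filebeat not installed on this host"
    status="skipped"
fi
```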

More Information

You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
