FortifyData Collector Appliance

Installation Instructions

The FortifyData Collector Appliance is a self-contained Linux appliance built on Fedora bootc. It ships in three formats so you can deploy it to whatever infrastructure you already run:

Format   Best for                                                        Architectures
ISO      Bare metal, USB install, any hypervisor that boots from CD/ISO  x86_64, ARM64
OVA      VMware vSphere / ESXi, VMware Workstation, VirtualBox           x86_64, ARM64
QCOW2    KVM / libvirt / Proxmox, OpenStack, any QEMU host               x86_64, ARM64

All three deploy the same appliance and reach the same first-boot state.

Minimum System Requirements

  • CPU: 2 vCPU
  • Memory: 8 GB RAM
  • Disk: 32 GB (500 GB recommended for production data retention)
  • Network: A single NIC with DHCP or a static lease, outbound HTTPS to the internet for updates and (optionally) ACME certificate issuance
  • Firmware: BIOS or UEFI

Deploying the Appliance

Deployment Options

Choose the deployment method that best fits your environment: the ISO for bare-metal or USB installs, the OVA for VMware and VirtualBox, and the QCOW2 image for QEMU-based hosts.

Installing from the ISO

Use the ISO for bare-metal installs, USB installs, or any hypervisor that can boot from an ISO. The installer is fully unattended; there are no prompts to answer.

  1. Download the ISO for your architecture:

  2. Write the ISO to a USB drive (skip this step if you're booting an ISO directly inside a hypervisor):

    On Linux / macOS:

    sudo dd if=collector-appliance-v1.1.6-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
    

    Replace /dev/sdX with the correct device for your USB drive. Be certain — dd will silently destroy the wrong disk.

    On Windows, use Rufus in DD mode.
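Before writing the image, it is worth verifying the download. A minimal sketch, assuming the download page publishes a .sha256 checksum file next to the ISO (the filenames here are assumptions; use whatever your download page provides):

```shell
# Verify the ISO against its published checksum before writing it.
# The .sha256 filename is an assumption; substitute the checksum file
# from your download page.
if [ -f collector-appliance-v1.1.6-amd64.iso.sha256 ]; then
    sha256sum -c collector-appliance-v1.1.6-amd64.iso.sha256
else
    echo "checksum file not found; download it next to the ISO first"
fi
```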

  3. Boot the target machine from the ISO/USB.

    The installer runs unattended, partitions the disk, installs the appliance, and reboots. No keyboard input is required.

  4. Wait for first boot to complete.

    On first boot the appliance initializes its services and renders a login banner that shows the appliance version, IP address, and the URL to the web UI. This can take up to 2 minutes the first time.

Deploying the OVA

Use the OVA for VMware vSphere / ESXi, VMware Workstation / Player, or VirtualBox.

  1. Download the OVA for your architecture:

  2. Import the OVA:

    • vSphere / ESXi: in the vSphere Client, choose Deploy OVF Template, select the downloaded .ova file, and follow the wizard. Place the VM on the desired datastore and network. Defaults are 2 vCPU, 8 GB RAM, a single NIC (E1000), and a 500 GB thin-provisioned disk.
    • VMware Workstation / Player: File → Open, select the .ova, and accept the defaults.
    • VirtualBox: File → Import Appliance, select the .ova, and accept the defaults. VirtualBox may warn about the OVF version; the warning is safe to ignore.
  3. Power on the VM.

    Watch the console for the login banner; it will display the IP address and web UI URL once first boot completes.
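For unattended labs, the VirtualBox import can also be scripted with the stock VBoxManage CLI. A sketch; the VM name collector is a local choice, and the .ova filename must match your download:

```shell
# Import the OVA and start the VM headless with VBoxManage.
# The guard skips the sketch cleanly on hosts without VirtualBox.
command -v VBoxManage >/dev/null || { echo "VirtualBox is not installed"; exit 0; }
VBoxManage import collector-appliance-v1.1.6-amd64.ova --vsys 0 --vmname collector
VBoxManage startvm collector --type headless
```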

Deploying the QCOW2 Image

Use the QCOW2 image for KVM / libvirt, Proxmox, OpenStack, or any other QEMU-based host.

  1. Download the QCOW2 for your architecture:

  2. Create a VM that uses the image as its disk.

    On KVM / libvirt, move the image into place and import it with virt-install:

    sudo mv collector-appliance-v1.1.6-amd64.qcow2 /var/lib/libvirt/images/collector.qcow2
    
    sudo virt-install \
        --name collector \
        --memory 8192 \
        --vcpus 2 \
        --disk path=/var/lib/libvirt/images/collector.qcow2,format=qcow2,bus=virtio \
        --os-variant fedora-eln \
        --network bridge=br0,model=virtio \
        --import \
        --noautoconsole
    

    Replace br0 with your bridge or use network=default for NAT.
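Once virt-install returns, the guest boots in the background. One way to find the address the login banner will show, assuming the VM was named collector as above:

```shell
# The guard skips the sketch cleanly on hosts without libvirt.
command -v virsh >/dev/null || { echo "libvirt is not installed"; exit 0; }
# List the IP addresses the guest acquired from the default libvirt
# DHCP network (on a bridge, try: virsh domifaddr collector --source arp).
virsh domifaddr collector
```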

    On Proxmox:

    • Upload the .qcow2 to local storage (or copy it directly into /var/lib/vz/images/<vmid>/).
    • Create a new VM with no disk, then attach the qcow2 via qm importdisk <vmid> collector-appliance-v1.1.6-amd64.qcow2 local-lvm.
    • Set the imported disk as the boot disk, allocate 2 vCPU and 8 GB RAM, and start the VM.

    On OpenStack, upload the image to Glance:

    openstack image create \
        --disk-format qcow2 \
        --container-format bare \
        --file collector-appliance-v1.1.6-amd64.qcow2 \
        --public \
        collector-appliance-v1.1.6
    
  3. Power on the VM and watch the serial console / VM console for the login banner.
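On OpenStack, the instance itself is then booted from the uploaded image. A sketch; the flavor and network names below are assumptions, so substitute ones that exist in your project and meet the minimum requirements (2 vCPU, 8 GB RAM, 32 GB disk):

```shell
# The guard skips the sketch cleanly on hosts without the OpenStack CLI.
command -v openstack >/dev/null || { echo "openstack CLI is not installed"; exit 0; }
# "m1.large" and "private" are placeholder flavor/network names.
openstack server create \
    --image collector-appliance-v1.1.6 \
    --flavor m1.large \
    --network private \
    collector
```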