I’ve been experimenting with sqlc for two tiny personal projects. My use cases for these projects are simple: sqlite3, run as a CLI, single user. I’ve had relative success with sqlboiler on personal projects and enjoy working with it.
sqlc feels a little bit too sparse for me right now. It has embedding support, though it doesn’t support sqlite3 yet. When returning multiple rows from a JOIN, sqlc won’t return a slice of structs from the JOIN. I believe this issue captures the problem, but unfortunately it’s closed.
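As a concrete sketch of the kind of query I mean (the schema and query name here are hypothetical, not from either project):
$ cat query.sql
-- Hypothetical example: for a JOIN like this, sqlc generates one flat
-- row struct per result row, not a slice of nested per-table structs.
-- name: ListAuthorsWithBooks :many
SELECT a.id, a.name, b.title
FROM authors a
JOIN books b ON b.author_id = a.id;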
I’m going to pause on my experimentation with sqlc for now and go back to sqlboiler for personal projects.
I’ve used stow for a while to manage my dotfiles but recently have moved to ansible. One problem I had with stow was how to handle work and personal dotfiles. Consider gitconfig, where user.email would be my personal email address on my personal machines and my work email address on my work machine. Using ansible templates to handle this with a single gitconfig template is the solution that I’m finding works nicely for me right now.
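As a rough sketch of what I mean (the path, variable name and values below are hypothetical), the single template renders user.email from a per-host variable such as git_email, defined differently for work and personal machines:
$ cat roles/dotfiles/templates/gitconfig.j2
# Hypothetical template: git_email is set per host/group in the ansible inventory.
[user]
    name = My Name
    email = {{ git_email }}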
Install devscripts, the kernel headers and the xtables-addons packages:
$ sudo apt install devscripts linux-headers-`uname -r` xtables-addons-common xtables-addons-source xtables-addons-dkms
xtables-addons-dkms fails to build. Most likely due to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014680.
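The next steps assume the xtables-addons source tree is unpacked in your working directory. If it isn’t, one way to fetch it along with its build dependencies (an assumption on my part; apt source needs deb-src entries in your apt sources) is:
$ sudo apt build-dep xtables-addons
$ apt source xtables-addons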
Build the package:
$ cd xtables-addons-3.21
$ debuild -b -uc -us
In the parent directory you should now have 4 deb packages:
$ ls -1 *.deb
xtables-addons-common-dbgsym_3.21-1_arm64.deb
xtables-addons-common_3.21-1_arm64.deb
xtables-addons-dkms_3.21-1_all.deb
xtables-addons-source_3.21-1_all.deb
Install the xtables-addons-dkms and xtables-addons-common packages:
$ sudo dpkg -i xtables-addons-dkms_3.21-1_all.deb xtables-addons-common_3.21-1_arm64.deb
This will build the xtables-addons kernel modules within /lib/modules/$(uname -r)/updates/dkms:
$ ls -1 /lib/modules/$(uname -r)/updates/dkms/xt_*.ko
/lib/modules/5.14.0-0.bpo.2-arm64/updates/dkms/xt_ACCOUNT.ko
/lib/modules/5.14.0-0.bpo.2-arm64/updates/dkms/xt_CHAOS.ko
[...]
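As an optional sanity check (not strictly required), you can confirm one of the modules loads:
$ sudo modprobe xt_ACCOUNT
$ lsmod | grep xt_ACCOUNT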
Add the accounting config to /etc/shorewall/accounting:
ACCOUNT(int-ext,10.0.1.0/24) - eth1 eth0
ACCOUNT(int-ext,10.0.1.0/24) - eth0 eth1
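Assuming shorewall is already managing your firewall, validate and reload the configuration so the accounting rules take effect:
$ sudo shorewall check
$ sudo shorewall reload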
Download the focal-server-cloudimg-amd64-disk-kvm.img image:
$ wget -O focal-server-cloudimg-amd64-disk-kvm.img https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64-disk-kvm.img
The following terraform config can exist within a single main.tf file.
Create the terraform configuration resource block to define the minimum terraform version and required providers:
terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.3"
    }
  }
}
Create a provider configuration which defines how terraform will connect to libvirtd:
provider "libvirt" {
uri = "qemu:///system"
}
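If you want to confirm that the URI is reachable before involving terraform, a quick optional check (not part of the terraform config) is:
$ virsh -c qemu:///system list --all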
Set up a pool and volume, used to store the Ubuntu 20.04 cloud image:
resource "libvirt_pool" "ubuntu20" {
name = "ubuntu20"
type = "dir"
path = "./terraform-provider-libvirt-pool-ubuntu"
}
resource "libvirt_volume" "ubuntu20" {
name = "ubuntu20"
pool = libvirt_pool.ubuntu20.name
source = "./focal-server-cloudimg-amd64-disk-kvm.img"
format = "qcow2"
}
At this point, you should be able to run terraform init and terraform apply to create the pool and volume resources:
$ terraform init
$ terraform apply
Using virsh, you can confirm that both resources have been created:
$ virsh pool-info ubuntu20
Name: ubuntu20
UUID: 558f8e2c-b9cb-46e6-9311-6468531322a8
State: running
Persistent: yes
Autostart: yes
Capacity: 68.17 GiB
Allocation: 45.58 GiB
Available: 22.59 GiB
$ virsh vol-info --pool ubuntu20 ubuntu20
Name: ubuntu20
Type: file
Capacity: 2.20 GiB
Allocation: 528.44 MiB
Create a user_data.cfg file, adding a user to allow you to log in:
#cloud-config
ssh_pwauth: True
users:
  - name: user1
    groups: sudo
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    plain_text_passwd: passw0rd
    lock_passwd: false
Create a network_config.cfg file which will be used by the guest for network configuration:
version: 2
ethernets:
  ens3:
    dhcp4: true
Create a meta_data.cfg file which will be used to pass in data to cloudinit:
local-hostname: ubuntu20.local
Using terraform’s templatefile function to render the user_data.cfg, network_config.cfg and meta_data.cfg files, create a libvirt_cloudinit_disk:
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
user_data = templatefile("${path.module}/user_data.cfg", {})
network_config = templatefile("${path.module}/network_config.cfg", {})
meta_data = templatefile("${path.module}/meta_data.cfg", {})
pool = libvirt_pool.ubuntu20.name
}
See How libvirt_cloudinit_disk works below if you’re curious about how the libvirt_cloudinit_disk resource works with cloudinit.
Create a network, the guest domain using the libvirt_domain resource, and an output that provides us with the guest’s IP address:
resource "libvirt_network" "lab" {
name = "lab"
domain = "lab.local"
mode = "nat"
addresses = ["10.0.100.0/24"]
}
resource "libvirt_domain" "ubuntu20" {
name = "ubuntu20"
memory = "512"
vcpu = 1
cloudinit = libvirt_cloudinit_disk.commoninit.id
network_interface {
network_name = "lab"
wait_for_lease = true
}
console {
type = "pty"
target_port = "0"
target_type = "serial"
}
console {
type = "pty"
target_type = "virtio"
target_port = "1"
}
disk {
volume_id = libvirt_volume.ubuntu20.id
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}
output "ip" {
value = libvirt_domain.ubuntu20.network_interface[0].addresses[0]
}
Initialize the terraform config and apply:
$ terraform init
$ terraform apply
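Once the apply completes, you can optionally confirm with virsh that the guest is running and has picked up a lease:
$ virsh list
$ virsh domifaddr ubuntu20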
Grab the IP address from the output of terraform apply or terraform refresh and SSH using the account created within user_data.cfg:
$ ssh user1@<guest-ip>
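Since the config defines an ip output, you can also print the address directly at any time:
$ terraform output ip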
You can use make to help re-create the terraform resources with a single command.
Consider this Makefile:
#!/usr/bin/env make
all: terraform-destroy terraform-apply

terraform-apply: terraform-init
	terraform apply -auto-approve

terraform-destroy: terraform-init
	terraform destroy -auto-approve

terraform-init:
	terraform init

.PHONY: all terraform-apply terraform-destroy terraform-init
Running the single command make will now destroy and re-create the terraform resources.
The libvirt_cloudinit_disk resource creates an ISO 9660 file using mkisofs and uploads the file to the ubuntu20 pool as a volume. mkisofs doesn’t ship with Debian 10, so if your host system is running Debian 10, you will have to provide an alternative:
$ sudo apt install xorriso
$ sudo update-alternatives --install /usr/bin/mkisofs mkisofs /usr/bin/xorrisofs 10
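You can confirm which alternative will now be used for mkisofs:
$ update-alternatives --display mkisofs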
The ISO 9660 file is mounted as a cdrom device within the guest domain:
$ virsh dumpxml ubuntu20
[...]
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='./terraform-provider-libvirt-pool-ubuntu/commoninit.iso'/>
<backingStore/>
<target dev='hdd' bus='ide'/>
<readonly/>
<alias name='ide0-1-1'/>
<address type='drive' controller='0' bus='1' target='0' unit='1'/>
</disk>
[...]
cloudinit allows users to provide user, network and meta data files to the instance using a NoCloud data source, which can be an ISO 9660 filesystem with the volume label cidata or CIDATA.
$ blkid /dev/sr0
/dev/sr0: UUID="2021-01-03-01-59-49-00" LABEL="cidata" TYPE="iso9660"
This device can be mounted, showing that it contains the user_data.cfg, network_config.cfg and meta_data.cfg files that were rendered by templatefile:
$ sudo mount /dev/sr0 /media
$ ls -la /media/*
-rwxr-xr-x 1 user1 user1 31 Jan 3 01:59 /media/meta-data
-rwxr-xr-x 1 user1 user1 46 Jan 3 01:59 /media/network-config
-rwxr-xr-x 1 user1 user1 163 Jan 3 01:59 /media/user-data
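For example, meta-data contains the contents of the meta_data.cfg file defined earlier:
$ cat /media/meta-data
local-hostname: ubuntu20.local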