Ledger Nano On KVM

To keep my cryptocurrencies as secure as possible, I only interact with them from within a virtual machine stored on an encrypted USB stick. I own both a Ledger Nano S and a Ledger Nano X, which connect over USB. I don't use libvirt for this, as I want the setup to be as quick and easy to use as possible. So here's the secret formula for accessing those hardware wallets from a GNU/Linux KVM VM via USB passthrough:
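A minimal sketch of what such a passthrough command can look like, assuming your QEMU build ships the usb-host device; mywallet.img is a placeholder disk image, and 0x2c97 is Ledger's USB vendor ID (product IDs vary per device, so matching on the vendor alone is simplest):

```shell
# Pass every Ledger device (USB vendor ID 0x2c97) through to the guest.
# mywallet.img is a placeholder; adjust paths and memory to taste.
sudo qemu-system-x86_64 -enable-kvm -m 1024 -cpu host \
    -drive file=mywallet.img,if=virtio \
    -usb -device qemu-xhci \
    -device usb-host,vendorid=0x2c97
```

Matching on the vendor ID means both the Nano S and the Nano X get passed through without editing the command line per device.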

QEMU/KVM Shorter Command Line

I keep reading overcomplicated QEMU/KVM command lines when really, to start a virtual machine with a VirtIO disk and a bridged VirtIO NIC, only this command is needed:

$ sudo qemu-system-x86_64 -enable-kvm -m 1024 -cpu host -daemonize \
    -drive file=mydisk.img,if=virtio \
    -net nic,model=virtio -net tap,ifname=tap0

The drive type is virtio, the NIC model is virtio, and the interface is of the tap type; this will summon /etc/qemu-ifup to attach the interface to your bridge. Depending on your QEMU installation, this will either fire up a window showing your virtual machine booting, or start a VNC server on port 5900.
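For reference, the /etc/qemu-ifup script mentioned above can be as small as the sketch below; QEMU invokes it with the tap interface name as its first argument, and br0 is an assumed, already-configured bridge name:

```shell
#!/bin/sh
# /etc/qemu-ifup -- invoked by QEMU with the tap interface name as $1.
# br0 is assumed to be an existing bridge; adjust to your setup.
ip link set "$1" up
ip link set "$1" master br0
```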

Kubernetes under my desk

I've been diving into Kubernetes for a couple of months now. Discovering the possibilities and the philosophy behind the hype definitely changed my mind. Yes, it is huge (in every sense ;) ) and it does change the way we, ex-sysops / ops / sysadmins, do our work. Not tomorrow, not soon: now. I've had my hands on various managed Kubernetes clusters like GKE (Google Kubernetes Engine), EKS (Amazon Elastic Kubernetes Service) or the more humble minikube, but I'm not happy when I don't understand what a technology is made of.

NetBSD/amd64 7.0 on kvm

If you recently tried to install NetBSD 7.0 using Linux KVM, you might have encountered the following failure: This bug has recently been fixed on the 7 branch, but the official ISO images are not yet updated, so you'll have to use the NetBSD daily builds mini-ISO, which includes Christos' fix to bus_dma.c. For the record, here's the virt-install command I use:

sudo virt-install --virt-type kvm --name korriban --ram 4096 \
    --disk path=/dev/vms/korriban,bus=virtio --vcpus 2 \
    --network bridge:br0,model=virtio --graphics vnc \
    --accelerate --noautoconsole --cdrom /home/imil/iso/boot.

virt-manager: "nc: unix connect failed"

I came across an annoying behaviour while trying to connect to a remote KVM hypervisor from a FreeBSD GUI. virt-manager failed to connect to the server and showed the following error message: In short, virt-manager tries to access /usr/local/var/run/libvirt/libvirt-sock because it is compiled with a /usr/local PREFIX on FreeBSD. Of course, nothing was planned for overriding this in a plain-text configuration file. I figured out this has to be configured in GConf, for example using gconf-editor; simply replace:
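If you'd rather sidestep GConf entirely, note that libvirt remote URIs accept a socket= parameter that overrides the compiled-in socket path; a hedged sketch, where kvmhost is a placeholder hostname:

```shell
# socket= tells the remote end which UNIX socket to connect to,
# bypassing the /usr/local default baked into the FreeBSD build.
virsh -c 'qemu+ssh://root@kvmhost/system?socket=/var/run/libvirt/libvirt-sock' list
```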

There's fun to be had

Nothing up my sleeves, nothing in my pockets: Hm, yes, so what? Wait, wait. Yeah, great, and? Waaaait fxp0?? What do you mean, fxp0?! Well, because: hAOOOOOOOOOOOoooooooooooooooooon, oh yes. The documentation is here, and contrary to what you can read there, I'm running a JUNOS 9.3R3.8 (the famous one) on a FreeBSD 6.4 base. Follow the documentation scrupulously, in particular the Modify jinstall file and Watchdog panic immediately after boot sections, the latter being essential if you want, like me, to run your test on recent versions of KVM.
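For illustration, booting the resulting Olive disk image under KVM can look like the sketch below; olive.img is a placeholder name, and the i82557b NIC model is one of those the FreeBSD fxp driver recognizes, hence the fxp0 interface showing up inside JUNOS:

```shell
# olive.img is a placeholder for the disk prepared from the jinstall;
# the i82557b NIC shows up as fxp0 inside JUNOS.
sudo qemu-system-x86_64 -enable-kvm -m 512 \
    -drive file=olive.img,if=ide \
    -net nic,model=i82557b -net tap,ifname=tap0 \
    -serial telnet:localhost:8888,server,nowait -nographic
```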

KVM/QEMU, rtl8139 and Segmentation Fault

For a while now, I've been noticing that the latest QEMU builds shipped with KVM blow up in mid-air when the VM starts. Being lazy, I kept using the QEMU from release 68, which did work. But eventually it really started to itch. With heavy use of gdb, I figured out that when SDL output is enabled at the same time as support for a virtual network card, QEMU coredumps.
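To reproduce that kind of diagnosis, running QEMU under gdb and grabbing a backtrace at the moment of the crash is enough; a minimal sketch, where vm.img is a placeholder image:

```shell
# Run QEMU under gdb; when it segfaults, `bt` prints the backtrace
# pointing at the offending code path (here, SDL output + rtl8139 NIC).
gdb --args qemu-system-x86_64 -enable-kvm -m 512 \
    -drive file=vm.img \
    -net nic,model=rtl8139 -net tap,ifname=tap0
# then, at the (gdb) prompt:
#   run
#   bt
```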

Under the cobblestones, NetBSD

Echoing this very good post showing a working network configuration for qemu / kvm, here is my own recipe. The goals being:
. A NetBSD 3.1 VM on the same LAN as the host
. Running it in the background
. The ability to administer it via VNC in case of a crash
Xen-style, basically. The host is a Debian x86, and obviously the hardware supports the VT instructions:
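Those three goals map onto a single command line along these lines; a hedged sketch, where netbsd31.img and tap0 are assumed names, with -daemonize for background operation and -vnc for out-of-band console access:

```shell
# Background the VM (-daemonize), bridge it onto the LAN through tap0,
# and expose its console over VNC on display :0 (TCP port 5900).
sudo qemu-system-x86_64 -enable-kvm -m 512 -daemonize \
    -drive file=netbsd31.img,if=ide \
    -net nic -net tap,ifname=tap0 \
    -vnc :0
```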