A bridged network gives a virtual machine a dedicated network interface on the physical LAN, letting guests connect to the network outside the host machine.
Let us list the available network connections:
nmcli connection show
Output:
NAME UUID TYPE DEVICE
Wired connection 1 fbbdd6f9-0970-354e-8693-ff8050a85c77 ethernet enp0s3
Now we will create a bridge interface br0 and attach the physical interface enp0s3 to it as a slave:
sudo nmcli con add ifname br0 type bridge con-name br0
sudo nmcli con add type bridge-slave ifname enp0s3 master br0
Next, assign the physical interface's IP address to the bridge, since the bridge interface will act as the primary network interface of your host system:
sudo nmcli con mod br0 ipv4.addresses 192.168.0.10/24
sudo nmcli con mod br0 ipv4.gateway 192.168.0.1
sudo nmcli con mod br0 ipv4.dns "8.8.8.8","192.168.0.1"
sudo nmcli con mod br0 ipv4.method manual
KVM requires a few additional bridge settings, so set them:
sudo nmcli con modify br0 bridge.stp no
sudo nmcli con modify br0 bridge.forward-delay 0
Disable the physical interface and enable the network bridge:
sudo nmcli con down "Wired connection 1" && sudo nmcli con up br0
Run the above command from a local console; you may lose your SSH session if you run it remotely.
Finally, check the network connections:
sudo nmcli con show
Output:
NAME UUID TYPE DEVICE
br0 ee117099-4935-4dde-a1f5-4981b0d9585e bridge br0
bridge-slave-enp0s3 492b5c81-e59d-4150-9b24-0348cd0dd87c ethernet enp0s3
Wired connection 1 fbbdd6f9-0970-354e-8693-ff8050a85c77 ethernet --
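With the bridge in place, a KVM guest can be attached to it. As a rough sketch (the guest name, memory, disk size and ISO path below are placeholders, not values from this article), virt-install accepts the bridge directly:
sudo virt-install --name testvm --memory 2048 --vcpus 2 --disk size=20 --cdrom /path/to/install.iso --network bridge=br0,model=virtio --os-variant generic
For an existing guest, the equivalent change is to point its network interface at br0 (for example via virsh edit), roughly:
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>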
By: Raj
Category: Linux
Linux: Create Network Bridge For KVM
fio is available on most distributions as a package with that name. It won't be installed by default, so you will need to get it. You can click apt://fio (Ubuntu) or appstream://fio (Plasma Discover) to install it (on some distributions, anyway).
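If clicking a link is not an option, the package can typically be installed from the command line with your distribution's package manager; a few common examples (assuming standard repositories):
sudo apt install fio      # Debian/Ubuntu
sudo dnf install fio      # Fedora
sudo pacman -S fio        # Arch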
fio is not at all straightforward or easy to use; it requires quite a lot of parameters. The ones you want are:
--name= names your test run's "job". It is required.
--eta-newline= forces a new line of status output for every period 't' passed. You may want --eta-newline=5s.
--filename= specifies the file to read from or write to.
--rw= specifies whether you want a read (--rw=read) or write (--rw=write) test.
--size= decides how big of a test-file it should use. --size=2g may be a good choice. A file (specified with --filename=) this size will be created so you will need to have free space for it. Increasing to --size=20g or more may give a better real-world result for larger HDDs.
A small 200 MB file on a modern HDD won’t make the read/write heads move very far. A very big file will.
--io_size= specifies how much I/O fio will do. Setting it to --io_size=10g will make it do 10 GB worth of I/O even if the --size specifies a (much) smaller file.
--blocksize= specifies the block-size it will use, --blocksize=1024k may be a good choice.
--ioengine= specifies the I/O engine to use. There's a lot to choose from; run fio --enghelp for a long list. fio is a very versatile tool, and whole books could be (and probably have been) written about it. libaio, as in --ioengine=libaio, is a good choice and it is what we use in the examples below.
--fsync= tells fio to issue an fsync, which flushes kernel-cached pages to disk, for every given number of blocks written.
--fsync=1 is useful for testing random reads and writes.
--fsync=10000 can be used to test sequential reads and writes.
--iodepth= specifies a number of I/O units to keep in-flight.
--direct= specifies if direct I/O, which means O_DIRECT on Linux systems, should be used. You want --direct=1 to do disk performance testing.
--numjobs= specifies the number of jobs. One is enough for disk testing. Increasing this is useful if you want to test how a drive performs when many parallel jobs are running.
--runtime= makes fio terminate after a given amount of time. This overrides other values specifying how much data should be read or written. Setting --runtime=60 means that fio will exit and show results after 60 seconds even if it's not done reading or writing all the specified data. One minute is typically enough to gather useful data.
--group_reporting makes fio group its reporting, which makes the output easier to understand.
Put all the above together and we have some long commands for testing disk I/O in various ways.
Note: The file given with --filename= will be created with the specified --size= on the first run. It is filled with random data because of the way some drives handle zeros. The file can be re-used in later runs if you specify the same filename and size each run.
Testing sequential read speed with very big blocks
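A representative command for this test, assembled from the parameters described above (a sketch; the file name and sizes are just the earlier suggestions, adjust them to your drive):
fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Swapping --rw=read for --rw=write gives the corresponding sequential write test; the SLC cache note below is mainly relevant to that write variant.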
The result should be close to what the drive manufacturer advertises, and not far off the guesstimate hdparm provides with its -t option. Testing this on a two-drive RAID1 array will result in both drives being utilized.
Note: Many modern SSDs with TLC (Triple Level Cell) NAND have a potentially large SLC (Single Level Cell) area used to cache writes. The drive's firmware moves that data to the TLC area when the drive is otherwise idle. Doing 10 GB of I/O to a 2 GB file during 60 seconds – what the above example does – is nowhere near enough to account for the SLC cache on such drives. You will probably not be copying 100 GB to a 240 GB SSD on a regular basis, so that may have little to no practical significance. However, do know that if you test (assuming you have 80 GB free) a WD Green SSD with 100 GB of I/O to an 80 GB file with a 5 minute (60*5=300) limit, you will get much lower results than if you write 10 GB to a 2 GB file. To test yourself, try:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=60g --io_size=100g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=300 --group_reporting
You need to increase size (the file used for testing), io_size (the amount of I/O done) and runtime (how long the test is allowed to run) to bypass a drive's caches.
Testing random 4K reads
Testing random reads is best done with a queue-depth of just one (--iodepth=1) and 32 concurrent jobs (--numjobs=32).
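A representative command (a sketch, reusing the same test file and sizes as above, with 4K blocks and the queue-depth/job settings just mentioned):
fio --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting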
Example results show that the difference between an older 5400 RPM HDD and an average low-end SSD is staggering when it comes to random I/O; there is a world of difference between half a megabyte and 284 megabytes per second.
Mixed random 4K read and write
Setting --rw=randrw tells fio to do both reads and writes. Again, a queue depth of just one (--iodepth=1) and 32 concurrent jobs (--numjobs=32) will reflect a high real-world load. This test shows the absolute worst I/O performance you can expect; don't be shocked if an HDD shows performance numbers in the low percentages of what its specifications claim it can do.
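A representative command (a sketch, with the same assumptions as the previous examples):
fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting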
The following is a very good tutorial. It is not all-inclusive, but very close:
Based on https://askubuntu.com/a/293029/286776
Installation date: 15-09-2018
Additional notes based on my own experience
The process describes a completely fresh installation with full repartitioning; however, it should also work fine when Windows is already installed (e.g. a brand new machine with Windows preinstalled).
The process was conducted on Dell’s XPS 15 9570 (2018) with specs:
CPU: i7-8750H
Screen: 4K with Touch
RAM: 16 GB (original) / 32 GB (manually upgraded)
Drive: 512 GB (SK Hynix PC401)
Windows 10 Pro license
BIOS version: 1.3.1
Surprisingly, Ubuntu's update manager supports BIOS updates out of the box
My installation did not require disabling TPM or Secure Boot
My installation did not force me to recover BitLocker after Ubuntu's installation
Some people report that it was needed in their case
See “Additional notes” for more info about GRUB & Booting into Windows
1. Preparation (using another computer with Ubuntu)
Create Windows installation USB stick
Download .ISO file from Microsoft’s webpage
Create bootable USB using WoeUSB
Ubuntu has an option to "restore" ISO images using the Disks utility, but it does not work correctly (the Windows installer asks for additional drivers)
I also had to compile WoeUSB myself because of a weird bug in the package supplied by the default Ubuntu PPA that would not let me finish the installation process (see the example command after this list)
Create Ubuntu installation USB stick
Download .ISO file from Ubuntu’s webpage
Create bootable USB using “whatever”
Go to the BIOS (F2) and switch the SSD from "RAID mode" to "AHCI mode"
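A sketch of the WoeUSB invocation mentioned above (the ISO file name and target device are placeholders; double-check the device path before running, as it will be wiped):
sudo woeusb --device Win10.iso /dev/sdX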
2. Install Windows
Insert newly created bootable USB and start the installation process
Re-partition your drive
My partitioning scheme (devices might be labeled differently!):
Boot drive for Linux: 1GB
/dev/nvme0n1p1
Windows OS drive: ~75GB
/dev/nvme0n1p5
Windows will automatically create additional partitions before the actual OS partition as soon as you create the first "regular" partition
Windows data drive: ~100GB
/dev/nvme0n1p6
Ubuntu LUKS drive: ~300GB
/dev/nvme0n1p7
Can be created later
Install Windows on the “Windows OS drive”
Boot to Windows after installation, install all updates
Enable BitLocker on “Windows data drive” (“Windows OS drive” was already encrypted)
Create recovery keys for both BitLocker-protected drives and store them somewhere (e.g. an additional USB stick)
3. Install Ubuntu
Insert newly created bootable USB and start the installation process
Create a LUKS container on the "Ubuntu LUKS drive" and "wipe" it:
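A minimal sketch of this step, assuming the LUKS partition is /dev/nvme0n1p7 as in the scheme above (the mapper name ubuntu_crypt is arbitrary):
sudo cryptsetup luksFormat /dev/nvme0n1p7
sudo cryptsetup open /dev/nvme0n1p7 ubuntu_crypt
# "wiping" by writing zeros through the mapping fills the partition with random-looking ciphertext
sudo dd if=/dev/zero of=/dev/mapper/ubuntu_crypt bs=16M status=progress
sudo cryptsetup close ubuntu_crypt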
Additional notes
Windows can be accessed using the F12 boot option menu
It can also be accessed using GRUB's menu; however, it then prompts me for the BitLocker recovery key
Cancelling the recovery and using the F12 -> Windows Boot Manager trick did not prompt for the recovery key again…
Both when installing Ubuntu and sometimes when rebooting the installed system, "poweroff" or "reboot" results in a prolonged shutdown with a locked-up display.
Fixed with the dell-xps-9570-ubuntu-respin tweak script
About the BIOS upgrade:
An upgrade from 1.3.0 to 1.3.1 required swapping the RAM sticks back to the original ones, because the machine would not boot and just flashed white & amber LEDs (supposedly indicating a "memory problem"). After booting just once with the original sticks, I swapped back to the 2x16GB sticks without a problem.
Changelog
[2019.10.24]
Added a link to an article related to enabling Yaru-dark in GNOME Shell (e.g. notification center background adjustment, which is white by default).
By: M Dziekon
Category: Linux
Linux: Dual boot Windows/Ubuntu with secure LVM
Gentoo Cheat Sheet
This is a reference card of useful commands and tips for administrating Gentoo systems. Newcomers and grey beards alike are encouraged to add their helpful tips below.
Package management
Sync methods
Important
It is important to read and follow any and all news items that may be listed after performing a repository sync. See detailed instructions about upgrades.
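News items can be listed and read with the eselect news module, for example:
root #eselect news list
root #eselect news read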
Portage
Sync all repositories that are set to auto-sync including the Gentoo ebuild repository:
root #emaint --auto sync
Or, for short:
root #emaint -a sync
Sync the Gentoo ebuild repository using the mirrors by obtaining a snapshot that is (at most) a day old:
root #emerge-webrsync
emerge --sync now runs the emaint sync module with the --auto option. See Portage's sync operation.
eix
Sync custom package repository and the Gentoo ebuild repository using eix:
root #eix-sync
app-portage/eix can be installed by issuing:
root #emerge -a app-portage/eix
Gather more information on eix by reading its manual:
user $man eix
layman
Warning
Eselect/Repository supersedes layman
To sync overlays created by layman (layman does not manage overlays defined in /etc/portage/repos.conf):
root #layman -S
app-portage/layman can be installed by issuing:
root #emerge -a app-portage/layman
Package listings
qlist
List installed packages with version number and name of overlay used:
root #qlist -IRv
qlist is provided by app-portage/portage-utils.
eix
To view the list of packages in the world set, along with their available versions, it is possible to use eix:
root #eix --world | less
To keep color in the output, use the --color switch:
root #eix --color -c --world | less -R
Package installation
In the following examples the www-client/firefox package will be used, but users should replace it with the package they want to install.
List what packages would be installed, without installing them:
user $emerge --pretend --verbose www-client/firefox
Or, for short:
user $emerge -pv www-client/firefox
List what packages would be installed, ask for confirmation before installing them:
root #emerge --ask --verbose www-client/firefox
Install a specific version
Install a specific version of a package (use "\=" (backslash and equal sign) for shells that attach special meaning to the "=" character). This example will install the package immediately, without asking for confirmation; use with caution or add the --ask option:
root #emerge =www-client/firefox-24.8.0
Install without adding to the world file
Install a package without adding it to the world file:
root #emerge --ask --oneshot www-client/firefox
Or, for short:
root #emerge -a1 www-client/firefox
Package removal
Recommended method
The recommended way to remove a package is by using emerge --deselect. This removes the specified package from the @world set (i.e. says the package is no longer wanted). To clean up the system afterwards, run depclean as given below.
root #emerge --deselect www-client/firefox
Now run emerge --depclean. The --pretend option will have emerge display what actions would be taken; this must be reviewed to make sure no required packages would be removed:
user $emerge --pretend --depclean
If emerge --depclean has not been run in a while, it may try to remove many packages - caution is advised. Once it has been assured that emerge --depclean will only remove unneeded packages, run the following (the --ask option is not needed after a check via --pretend, but is included here to help avoid "copy paste" mishaps):
root #emerge --ask --depclean
Separately, to remove a package that no other packages depend on:
root #emerge --ask --verbose --depclean www-client/firefox
As a safety measure, depclean will not remove any packages unless all required dependencies have been resolved. As a consequence of this, it often becomes necessary to run:
root #emerge --ask --verbose --update --newuse --deep @world
Use --changed-use in place of --newuse to avoid rebuilds when the only changes are USE flags added to or dropped from the repository. Use the --quiet flag for more succinct execution:
root #emerge --ask --quiet --update --changed-use --deep @world
Unclean removal (ignoring dependencies)
Warning
Use the --unmerge option, or its shorthand equivalent -C, with extreme caution, only if necessary, and only once properly informed of what it does. This will break the system, or other software, if used on some packages. The correct way to remove packages in Gentoo is usually with the --depclean option, as described above.
Remove a package even if it is required by other packages, or is a vital system package:
root #emerge --unmerge www-client/firefox
This may sometimes be useful to temporarily remove a hard block.
The -C switch is short for --unmerge.
Tip
Do not confuse the lower case -c switch, which is short for --depclean (and is safe), with the upper case -C switch, which risks damaging the system and should only be used when absolutely required.
Package upgrades
Upgrade all packages in the world set, their dependencies (--deep), and packages that have USE flag changes (avoiding unnecessary rebuilds when USE changes have no impact):
root #emerge --ask --verbose --update --deep --changed-use @world
The --newuse option may be used in place of --changed-use to make sure that all package USE flags reflect the current state of those in the Gentoo repository, though this will entail more rebuilds. The --with-bdeps=y option can also be used to update build-time dependencies.
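Putting these together, one common full-upgrade invocation looks roughly like:
root #emerge --ask --verbose --update --deep --newuse --with-bdeps=y @world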
Package troubleshooting
Check for and rebuild missing libraries (not normally needed):
root #revdep-rebuild -v
equery is part of app-portage/gentoolkit. You can obtain it by issuing this command:
root #emerge -a gentoolkit
Tell which installed package provides a command using equery:
user $equery b `which vim`
Tip
qfile can provide a faster alternative to equery, if needed.
Tell which package (installed or not) provides a command using e-file:
user $e-file vim
Install e-file with:
root #emerge -a app-portage/pfl
Tell which packages depend on a specific package (www-client/firefox in the example) using equery:
user $equery d www-client/firefox
Get information about a package using eix:
root #eix www-client/firefox
Warning
Do not unemerge sys-libs/glibc. It is needed by nearly every other package. If you inadvertently remove it, you may need a rescue stick/disk. You can fetch glibc after setting PORTAGE_BINHOST="http://packages.gentooexperimental.org/packages/amd64-stable/" in /etc/portage/make.conf.
Portage enhancements
Manage configuration changes after an emerge completes:
root #dispatch-conf
Or alternatively:
root #etc-update
After installations or updates
After updating perl-core packages:
root #perl-cleaner --all
or, if the previous command didn't help:
root #perl-cleaner --reallyall -- -av
For haskell packages:
root #haskell-updater
USE flags
Obtain descriptions and usage of the USE flag X using euse:
user $euse -i X
Gather more information on euse by reading its manual page:
user $man euse
Show what packages have the mysql USE flag:
user $equery hasuse mysql
Show what packages are currently built with the mysql USE flag:
user $eix --installed-with-use mysql
Show what USE flags are available for a specific package:
user $equery uses <package-name>
Quickly add a required USE flag for a package install:
root #echo 'dev-util/cmake -qt5' >> /etc/portage/package.use
Important Portage files
/etc/portage - primary configuration directory for Portage.
/etc/portage/make.conf - Global settings (USE flags, compiler options).
/etc/portage/package.use - USE flags of individual packages. Can also be a folder containing multiple files.
/etc/portage/package.accept_keywords - Keyword individual packages; e.g. ~amd64, ~x86, or ~arm.
/etc/portage/package.license - Accepted licenses
/var/lib/portage/world - List of explicitly installed package atoms.
/var/db/pkg - Contains, for every installed package, a set of files describing the installation.
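As an illustration of the two most commonly edited files above, a minimal sketch (the specific values are examples, not recommendations):
FILE /etc/portage/make.conf
COMMON_FLAGS="-O2 -pipe -march=native"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"
USE="X -gnome"
ACCEPT_LICENSE="@FREE"
FILE /etc/portage/package.use
# per-package USE flags, one atom per line (same example as the echo command earlier)
dev-util/cmake -qt5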
Log management
genlop
genlop is a Portage log processor; it can also estimate build times when emerging packages.
Install genlop by issuing:
root #emerge -a app-portage/genlop
You can gather more information on app-portage/genlop by reading its manual page:
root #man genlop
View the last 10 emerges (installs):
root #genlop -l | tail -n 10
View how long emerging LibreOffice took:
root #genlop -t libreoffice
Estimate how long emerge -uND --with-bdeps=y @world will take:
root #emerge -puND --with-bdeps=y @world | genlop --pretend
Watch the latest merging ebuild during system upgrades:
root #watch genlop -unc
Overlays
eselect repository
app-eselect/eselect-repository can be installed by issuing:
root #emerge -a app-eselect/eselect-repository
List all existing overlays:
user $eselect repository list
List all installed overlays:
user $eselect repository list -i
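To add a new overlay and sync it (a sketch; "guru" is just an example repository name):
root #eselect repository enable guru
root #emaint sync -r guru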
See also Eselect/Repository
Layman
Warning
Eselect/Repository supersedes layman
app-portage/layman can be installed by issuing:
root #emerge -a app-portage/layman
List all existing overlays:
user $layman -L
List all installed overlays (layman does not manage overlays defined in /etc/portage/repos.conf):
user $layman -l
See also Layman
Services
Obtain root shell (if the current user is listed in the sudoers list):
user $sudo -i
OpenRC
Start the ssh daemon in the default runlevel at boot:
root #rc-update add sshd default
Start the sshd service now:
root #rc-service sshd start
Check if the sshd service is running:
root #rc-service sshd status
systemd
Start the ssh daemon at boot:
root #systemctl enable sshd
Start the sshd service now:
root #systemctl start sshd
Check if the sshd service is running:
root #systemctl status sshd
Gentoo Monthly Newsletter (GMN)
Search packages in Portage by regular expressions:
root #emerge -s "%^python$"
Overlays vary from very small to very large in size. As a result they slow down the majority of Portage operations. That happens because overlays do not contain metadata caches. The cache is used to speed up searches and the building of dependency trees. A neat trick is to generate local metadata cache after syncing overlays.
root #emerge --regen
This trick also works in conjunction with eix. eix-update can use the metadata cache generated by emerge --regen to speed things up. To enable this, add the following variable to /etc/eixrc/00-eixrc:
FILE /etc/eixrc/00-eixrc
OVERLAY_CACHE_METHOD="assign"
qcheck
Use qcheck to verify installed packages:
root #qcheck vim-core
qcheck comes with app-portage/portage-utils and can be installed by running this command:
root #emerge -a app-portage/portage-utils
Learn more about qcheck by reading its manual page:
user $man qcheck
Category: Linux
Linux: Gentoo Cheat Sheet 2