One physical machine and one virtual machine per person; the tasks are completed inside the VM server0.
Exam time: 4 hours.
You must configure the YUM repository; follow the instructions on the exam page.
Mind the format of the .repo file; if no sample is provided, you have to write it yourself.
The firewall and SELinux can be left alone.
1. Identify hardware characteristics
The file /root/dmidecode.out contains the output from the dmidecode utility that was run on another system.
Using this file, identify the size of the L1 and L2 data caches on the system on which this output was generated.
If the output contains information about multiple processors or multiple cores, you should base your answer on only a single processor or core.
Enter the values you find and click on ‘Submit Answers’ to record your answers.
L1 cache: ?? KiB
L2 cache: ?? KiB
Solution
1. To practice, generate a dmidecode.out file yourself. It comes in two forms, plain text and binary, and it is not certain which one the exam provides.
A. Plain text:
dmidecode > dmidecode.out
B. Binary:
dmidecode --dump-bin dmidecode.out
2. For the plain-text form, search with vim:
vim dmidecode.out
# First search for Core Count and find the CPU whose core count is 1
# Then search for L1/L2: note the cache handles listed for that CPU and follow them to the matching Cache Information entries for the installed sizes
# If the L1 cache you find is 0, use the first CPU's L1 cache instead
# If the CPU has 2 cores, divide the L1 cache size by 2
# Do not divide L2 by the core count; the L2 cache is shared
3. For the binary form, convert it back to text with dmidecode:
dmidecode --from-dump dmidecode.out > dmidecode.txt
Then search it with vim.
# Possible answers
# File 1: two CPUs, one 4-core and one 1-core: L1 = 32 KB, L2 = 8192 KB
# File 2: one CPU with 2 cores: L1 = 128/2 = 64 KB, L2 = 6144 KB
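A quicker route than paging through vim, using the same file names as above: for a binary dump, dmidecode can filter just the cache entries; for a text dump, grep does the same job.
dmidecode --from-dump dmidecode.out -t cache
grep -A5 'Cache Information' dmidecode.out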
2. Analyze sar output
The file sar.data, in the root home directory, contains data recorded on another system for a period of time.
Use this file to answer the following questions.
Note: the file contains data gathered on a system that used logical volumes. You should ignore any data pertaining to logical volumes and only report values for physical devices.
What was the maximum process count on the system? [multiple choice]
Which device had the highest write I/O rate? (device name can be entered either in sdX style or devX-XX style) [fill-in]
What was the maximum write I/O rate, in KiB/s, for this device? (rounded to the nearest hundred) [multiple choice]
When, in minutes, relative to the beginning of the monitoring period, did the system have a burst of activity that resulted in the contents of memory being written to disk? [multiple choice]
Solution
# Install the sysstat package; mind the YUM repository
rpm -q sysstat
yum -y install sysstat
# Two key points here: the sar syntax and the sort syntax
# Q1: the maximum process count is the peak of plist-sz (number of tasks in the task list), reported by sar -q
# Sort numerically on that column; add -r for reverse order if you prefer the largest first
# (if your locale prints the timestamp as two fields, e.g. "12:00:01 AM", shift all column numbers below by one)
LANG=C sar -f sar.data -q |sort -n -k3
# Q2: find the device with the highest write rate
# Use sar -d -p, then sort on the write-rate column (wr_sec/s)
LANG=C sar -f sar.data -d -p|sort -n -k5
# The answer should be sdb
# Q3: find the highest write rate itself
# Same command; multiply the sector count by 512 to get bytes, then divide by 1024 to get KiB
LANG=C sar -f sar.data -d -p|sort -n -k5
echo 182379.20*512/1024|bc
# Pick the choice that best matches, slightly above the computed result
# Q4: find the first interval in which disk activity suddenly spikes; the answer is the end point of that interval
# Use sar -b and page through; the output is already ordered by time (the first field)
LANG=C sar -f sar.data -b |more
# Find the time at which the figures suddenly jump, then take the end time of that interval
# The reference answer is 19:53
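As a sketch of doing Q2 and Q3 in one pass (same column assumptions as above, 512-byte sectors), awk can convert wr_sec/s to KiB/s before sorting:
LANG=C sar -f sar.data -d -p | awk '{print $1, $2, $5*512/1024}' | sort -k3 -n | tail
# Header lines evaluate to 0 and sort to the top; the last lines show the busiest device and its rate in KiB/s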
3. Identify system calls
Identify the most frequently called system call in the command:
/bin/dumpkeys
[fill-in]
Solution
strace -fc -S calls /bin/dumpkeys
Enter the top entry in the summary, ioctl, as the answer.
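dumpkeys writes the whole keymap to stdout while the strace -c summary goes to stderr, so redirecting stdout keeps the table readable:
strace -fc -S calls /bin/dumpkeys > /dev/null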
4. Limit memory allocation for multiple instances of an application
Configure station such that when the application /usr/local/bin/greedy is run, it fails with the message ‘unable to allocate memory’
but the application /usr/local/bin/checklimit displays the messages ‘Success: Address space limit is okay’
Solution
# Run both programs first to observe the current behavior
/usr/local/bin/greedy
/usr/local/bin/checklimit
# Check the current overcommit settings
sysctl -a|grep overcommit
vim /etc/sysctl.conf
vm.overcommit_memory = 2
# Apply immediately and re-check
sysctl -p
sysctl -a|grep overcommit
# Run both programs again to confirm the required messages
/usr/local/bin/greedy
/usr/local/bin/checklimit
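With vm.overcommit_memory = 2, an allocation fails once Committed_AS would exceed CommitLimit (swap plus RAM scaled by vm.overcommit_ratio), which is what makes greedy fail while checklimit passes. The current numbers are visible in /proc/meminfo:
grep -i commit /proc/meminfo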
5. Configure shared memory
Configure station so that the amount of SYSV shared memory available to all applications is at least 1 GiB but no more than 1.5 GiB.
This setting should persist across reboot.
Solution
# Check PAGE_SIZE
getconf PAGE_SIZE
# Work out the size: 1.4 GiB sits between the required 1 GiB and 1.5 GiB; the 4096 is the PAGE_SIZE output above
echo 1.4*1024*1024*1024/4096|bc
# Edit /etc/sysctl.conf (back it up first)
cp /etc/sysctl.conf /etc/sysctl.conf.bak
vim /etc/sysctl.conf
kernel.shmall = 367001
# Apply immediately
sysctl -p
# Verify
ipcs -l
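As a sanity check, converting the page count back to mebibytes should land inside the 1 GiB to 1.5 GiB window:
echo 367001*4096/1024/1024|bc
# approximately 1433 MiB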
6. Configure process priority
On station in /usr/local/bin there is an application called realtime.
Configure the system so that this application starts up automatically at system boot with a static priority of 27 using round robin priority scheduling.
This program should run as a background job.
You can verify that realtime is running by viewing /var/log/messages.
Solution
# There are four requirements:
# start automatically at boot
# static priority of 27
# round-robin CPU scheduling policy
# run as a background job
# Check the man page; search for CPU to find CPUSchedulingPolicy and CPUSchedulingPriority
man systemd.exec
# Check whether a service file already exists
cd /usr/lib/systemd/system
ls realtime*
# If there is none, create one (copying an existing unit as a template)
cp httpd.service realtime.service
vim realtime.service
[Unit]
Description=realtime
[Service]
ExecStart=/usr/local/bin/realtime
CPUSchedulingPolicy=rr
CPUSchedulingPriority=27
[Install]
WantedBy=multi-user.target
# The [Install] section is required so the unit can be enabled at boot
systemctl daemon-reload
systemctl enable realtime
systemctl restart realtime
# Check
tail -200 /var/log/messages
ps -eo comm,stat,user,pid,cls,rtprio |grep realtime
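chrt reads the live scheduling attributes straight from the kernel, which is the most direct confirmation:
chrt -p $(pidof realtime)
# expect: current scheduling policy SCHED_RR, current scheduling priority 27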
7. Categorize memory utilization
Determine the amount of physical memory, in pages, used by the realtime application. If necessary, pick the closest value from the list.
Click on ‘Submit Answers’ to record your answer. [multiple choice]
Solution
# Use ps to check realtime's memory; look at the RSS column
ps aux |grep realtime
# RSS is in KiB; the question asks for pages
getconf PAGE_SIZE
# e.g. an observed RSS of 548 KiB divided by 4 KiB per page
echo 548/4|bc
# Round to the nearest offered choice, but never one smaller than the computed value
# The reference answer is 138
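A one-liner that skips the grep, assuming 4 KiB pages and that the process is named realtime:
ps -o rss= -C realtime | awk '{print int($1/4)}'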
8. Configure system profiling support
On station there is a file in the root home directory called count_jiffies.stp.
Configure this system so that the root user can run this script.
Solution
# SystemTap needs kernel-devel and kernel-debuginfo matching the running kernel
yum -y install systemtap kernel-devel kernel-debuginfo
# Test-run the script
stap /root/count_jiffies.stp
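If the script fails, a minimal probe is a quick way to confirm the toolchain itself works before blaming the script:
stap -v -e 'probe begin { printf("systemtap ok\n"); exit() }'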
9. Choose correct application based on cache usage
There are two versions of an application in the home directory for the root account: cache-a and cache-b.
Both versions of the application do the same thing but one of the applications exhibits improper use of cache memory.
Determine which program has the better cache performance and copy this program into /usr/local/bin.
You may choose which utility, or utilities, to use to make this determination.
Solution
# Check whether valgrind is installed
which valgrind
# Install it if missing
yum -y install valgrind
# Use valgrind to measure the cache hit rates
valgrind --tool=cachegrind /root/cache-a
valgrind --tool=cachegrind /root/cache-b
# Copy the program with the better cache hit rate to /usr/local/bin
cp /root/cache-X /usr/local/bin
# Reference answer: B is the better one
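The lines worth comparing are the miss rates that cachegrind prints in its exit summary (on stderr); filtering makes the two runs easy to put side by side:
valgrind --tool=cachegrind /root/cache-a 2>&1 | grep 'miss rate'
valgrind --tool=cachegrind /root/cache-b 2>&1 | grep 'miss rate'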
10. Configure network buffers
Configure the network buffers for this system so that each UDP connection, both incoming and outgoing, is guaranteed a minimum of 128 KiB of buffer and a maximum of 192 KiB of buffer space.
Solution
# Compute the sizes with bc
echo 128*1024|bc
echo 192*1024|bc
# Edit /etc/sysctl.conf (back it up first)
cp /etc/sysctl.conf /etc/sysctl.conf.bak
vim /etc/sysctl.conf
net.core.rmem_default = 131072
net.core.rmem_max = 196608
net.core.wmem_default = 131072
net.core.wmem_max = 196608
# Apply immediately
sysctl -p
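UDP sockets take their guaranteed and maximum buffer sizes from these net.core defaults, so a quick read-back confirms the change took effect:
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max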
11. Configure sar
Configure the sar data collection scripts on station to run at 3 minute intervals.
Solution
cp /etc/cron.d/sysstat /etc/cron.d/sysstat.bak
vim /etc/cron.d/sysstat
# Change the sa1 interval to every 3 minutes
*/3 * * * * root /usr/lib64/sa/sa1 1 1
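For reference, the stock RHEL 7 file contains roughly the following; only the sa1 interval needs to change, and the daily sa2 summary line stays as it is:
*/10 * * * * root /usr/lib64/sa/sa1 1 1
53 23 * * * root /usr/lib64/sa/sa2 -A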
12. Configure swap space
This system should have a total of 2048 MB of swap space.
Configure enough additional swap to meet this requirement, subject to the following:
Do not delete any existing swap volumes.
The additional swap space should be split equally among two new partitions.
The new swap partitions should be mounted at system boot time.
Upon boot up, the kernel should use the new swap partitions before using the existing swap volume.
Solution
# Check the current swap and work out how much to add
free -m
swapon -s
# Check the disk devices
lsblk
# Each new partition gets (2048 minus the current swap) divided by 2
# Create the two partitions (type 82, Linux swap), then re-read the partition table
partprobe
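The partitioning step itself is not spelled out above; a non-interactive sketch with parted, assuming the spare disk is /dev/vdb and each partition needs 512 MiB:
parted -s /dev/vdb mklabel msdos
parted -s /dev/vdb mkpart primary linux-swap 1MiB 513MiB
parted -s /dev/vdb mkpart primary linux-swap 513MiB 1025MiB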
# Set up the swap areas
mkswap /dev/vdb1
mkswap /dev/vdb2
swapon -p 100 /dev/vdb1
swapon -p 100 /dev/vdb2
# Check the swap
free -m
swapon -s
# Add fstab entries with a priority (pri=100) higher than the existing swap volume, so the new partitions are used first
cp /etc/fstab /etc/fstab.bak
vim /etc/fstab
/dev/vdb1 swap swap defaults,pri=100 0 0
/dev/vdb2 swap swap defaults,pri=100 0 0
# Activate everything in fstab (mount -a does not activate swap; swapon -a does)
swapon -a
13. Configure network buffers for latency
Most of the network connections between this system and other systems will be via a low earth orbit satellite link.
The latency of this connection is approximately 500 milliseconds and the bandwidth of the link is 1.5 Mib/s (mebibits/second).
Tune the system so that for all TCP connections:
The minimum amount of memory available for buffering per connection is large enough for this latency and bandwidth.
The default amount of memory available for buffering per connection is equal to the minimum amount of memory for buffering.
The maximum amount of memory available for buffering per connection is equal to 1.5 times the minimum amount of memory available for buffering.
Solution
cp /etc/sysctl.conf /etc/sysctl.conf.bak
sysctl -a|grep ipv4.tcp
# Find these two keys
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 6291456
# The minimum is the bandwidth-delay product: 1.5 Mib/s x 0.5 s = 98304 bytes (three equivalent forms)
echo 1.5*1024*1024/8*0.5|bc
#echo 1.5*0.5*1024*1024/8|bc
#echo 1.5*0.5/8*1024*1024|bc
vim /etc/sysctl.conf
net.core.rmem_max = 147456
net.core.wmem_max = 147456
net.ipv4.tcp_rmem = 98304 98304 147456
net.ipv4.tcp_wmem = 98304 98304 147456
# Apply immediately
sysctl -p
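The result should read back as min = default = 98304 and max = 147456:
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem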
14. Configure application for memhog
There are two files in the /usr/local/bin directory: memapp1 and memapp2.
The user memhog should be able to run memapp1 but should be prevented from running memapp2 or any other application which exhibits the same characteristics as memapp2.
Solution
# memapp1 uses about 7500 pages; convert to KB by multiplying by 4, since the limits.conf address-space ('as') limit is in KB
vim /etc/security/limits.conf
memhog hard as 30000
memhog soft as 30000
# Verify
useradd -d /home/memhog -m memhog
echo "memhog"|passwd --stdin memhog
su – memhog
/usr/local/bin/memapp1
/usr/local/bin/memapp2
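The limit can also be read back directly as the user; ulimit -v reports the address-space cap in KB:
su - memhog -c 'ulimit -v'
# expect: 30000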
15. Configure disk flushing
Configure the system so that modified data can remain in memory for up to 45 seconds before being considered for flushing to disk.
Solution
# On the exam the 45-second value may differ
sysctl -a|grep dirty
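The kernel key is in centiseconds, so the usual bc does the conversion:
echo 45*100|bc
# 45 seconds = 4500 centiseconds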
vim /etc/sysctl.conf
vm.dirty_expire_centisecs = 4500
# Apply immediately
sysctl -p
16. Analyze application performance
There is a tarball in your root home directory, called application.tgz that contains a client-server application.
Both the client and server are designed to run on the same system.
The client application is designed as a test utility for the server. The client accepts input from the keyboard and sends it to the server. Typing a Ctrl-D will terminate the client.
The server accepts input from the client and logs it to the file /tmp/application.out.
Once it is put into production, the server application will be expected to run for long periods without a restart and, during the course of an average day, will be serving out hundreds of connection requests. The expected amount of data to be handled by the server will average around 50 KiB per second. The server will be deployed on a system which has sufficient disk space to handle this data rate, and log files will be rotated daily to ensure that disk utilization does not become a concern.
Analyze the server application and identify, if any, problematic behavior that it exhibits. Record your answer below.
The application exhibits symptoms of (a): [multiple choice]
Click the ‘Submit Answers’ button to save your answers.
Solution
# Unpack the tarball first (tar xzf application.tgz), then run the server under valgrind
valgrind --tool=memcheck ./server
valgrind --leak-check=full ./server
valgrind --tool=cachegrind ./server
valgrind --tool=callgrind ./server
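Since the server has to be interrupted by hand, writing the valgrind report to a file keeps the summary around afterwards; --log-file is a standard valgrind option:
valgrind --tool=memcheck --leak-check=full --log-file=server.val ./server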
# Then start the client, type some characters, and press Ctrl-D to submit
# After about a minute, interrupt the server process and read the results
# Copy the correct program into /usr/local/bin
# Record the server process's errors
more /tmp/application.out
# Reference answer: CPU cache misuse (major share), memory leak (slight)
# Reference answer: memory leak / CPU cache misuse (small share)
17. Configure a module
This system will be receiving a hardware upgrade in the form of a SCSI controller and a SCSI tape backup unit.
Make sure that the SCSI tape controller module st.ko loads with a default buffer size of 24 KiB.
You should verify that when the module is loaded the kernel reports the correct buffer size in the kernel ring buffer.
Solution
# On the exam, note the exact KiB value required
# Inspect the module parameters and the current value
modinfo -p st
cat /sys/bus/scsi/drivers/st/fixed_buffer_size
# Load the module at boot: create a module-load script containing the modprobe line
vim /etc/sysconfig/modules/st.modules
/usr/sbin/modprobe st
chmod +x /etc/sysconfig/modules/st.modules
# Set the default buffer size to 24 KiB
vim /etc/modprobe.d/myprobe.conf
options st buffer_kbs=24
# (alternative boot-time load) echo "modprobe st" >> /etc/rc.d/rc.local
# Reload the module and confirm
modprobe -r st
modprobe st
cat /sys/bus/scsi/drivers/st/fixed_buffer_size
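The task asks for confirmation in the kernel ring buffer; after reloading, the st driver should announce the new size there (24 KiB = 24576 bytes), something like "st: Version ..., fixed bufsize 24576":
dmesg | grep 'st:'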
18. Configure tuned
You want the majority of the disk I/O that is performed by the primary application running on your virtual system to have to wait no longer than a guaranteed amount of time before the I/O request is serviced.
Create an automatic tuning profile named performance that sets the default I/O scheduler for the drive that contains the root filesystem for your virtual system to the appropriate I/O scheduler.
For the purpose of the exam you may consider the drive to be an actual physical device.
This profile should be enabled and be the default profile when the system boots.
Solution
yum -y install tuned
systemctl enable tuned
systemctl start tuned
# Check the current scheduler (vda on the exam's virtual machine, sda on physical hardware)
cat /sys/block/vda/queue/scheduler
#cat /sys/block/sda/queue/scheduler
tuned-adm list
tuned-adm active
cd /etc/tuned
mkdir performance
cd performance
cp /lib/tuned/throughput-performance/tuned.conf ./
vim tuned.conf
# Delete everything else in the copied file; keep only the [disk] section below
[disk]
device=vda
elevator=deadline
tuned-adm list
tuned-adm profile performance
tuned-adm active
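A guaranteed service deadline points at the deadline elevator; after switching profiles, the scheduler file should show it selected in brackets:
cat /sys/block/vda/queue/scheduler
# e.g. noop [deadline] cfq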
19. Configure cgroup
Configure a cgroup slice named power on your system according to the following requirements:
The slice should be limited to 1024 relative CPU shares and 1024M of memory.
Make sure that the httpd service runs under the power slice automatically at boot time.
Do not use the flag CPUAccounting=yes in the slice unit file.
Solution
# On the exam the CPU-share and memory values may differ
yum -y install httpd
systemctl enable httpd
systemctl restart httpd
systemctl status httpd
# CPUShares and MemoryLimit are documented here
man systemd.resource-control
cd /etc/systemd/system
vim power.slice
[Unit]
Description=power Slice
[Slice]
CPUShares=1024
MemoryLimit=1024M
# Attach httpd to the slice via a drop-in
mkdir /etc/systemd/system/httpd.service.d
vim /etc/systemd/system/httpd.service.d/10-slice.conf
[Service]
Slice=power.slice
systemctl daemon-reload
systemctl restart httpd
systemctl status httpd
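Two quick checks that the unit actually landed in the slice, both standard systemd tooling:
systemctl show httpd -p Slice
# expect: Slice=power.slice
systemd-cgls --no-pager | grep -B1 -A3 power.slice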
20. Analyze pre-recorded activity
The file /root/.pcp/pmlogger/20141231.06.00.01.folio contains approximately one hour of pre-recorded system activity from a production server.
At around 40 minutes and 58 seconds in the recording, there is a sudden burst of I/O activity.
Analyze the collected data and answer the following questions.
Which device is a network attached storage device? [multiple choice]
What is the direction of the I/O activity? [multiple choice]
Which device has the highest average write I/O throughput? [multiple choice]
Click the ‘Submit Answers’ button to save your answers.
Solution
yum -y install pcp pcp-gui pcp-doc
# pmcd and pmlogger are the services involved; enable and start them (there is no "pcp" unit)
systemctl enable pmcd pmlogger
systemctl start pmcd pmlogger
# Browse the available disk metrics
pminfo | grep disk
# Against the exam archive:
pmval -a 20141231.06.00.01.folio disk.dev.total_bytes
# e.g. against a locally recorded practice archive:
pmval -a 20181019.21.59.0 disk.dev.total_bytes
pmval -a 20181019.21.59.0 disk.dev.read_bytes
pmval -a 20181019.21.59.0 disk.dev.write_bytes
pmval -a 20181019.21.59.0 disk.dev.write_bytes -S '@22:08:00' -T '@22:10:00'
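To turn the "40 minutes 58 seconds into the recording" hint into wall-clock -S/-T arguments, read the archive's start time from its label; assuming the archive referenced inside the folio is named 20141231.06.00.01:
pmdumplog -L 20141231.06.00.01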
21. Minimize page table size and TLB flushes
There are two versions of a program in /root that both allocate a 64 MiB segment of shared memory.
Configure your system so that it can run one or the other of these programs according to the following requirements:
The amount of memory consumed by the page tables for the process running the program is minimized
Assuming this system is primarily running only this program, TLB flushes are minimized when the program is running
The configuration changes you make must persist across reboots
The program hugepage.shm uses SYSV shared memory to allocate the 64 MiB segment of memory.
The program hugepage.fs uses a pseudo filesystem to handle the memory allocation.
You only need to configure your system to support one or the other program.
You do not need to configure your system to support both.
You should copy only the program you decide to use into the /usr/local/bin directory.
If you choose to support the hugepage.fs program, the pseudo filesystem should be mounted to the directory /bigpages. This filesystem should be automatically mounted after a system reboot. When running the hugepage.fs program, you will be prompted for the name of a file under the mountpoint for your pseudo filesystem, you should use the file /bigpages/memory.
Both programs will pause after allocating the shared memory and prompt you to continue when you are ready. This is so that you may examine the system while the program is running to confirm that memory is being properly allocated.
Solution
# Use hugepage.shm
sysctl -a|grep huge
vim /etc/sysctl.conf
# 64 MiB / 2 MiB per hugepage = 32 hugepages
vm.nr_hugepages = 32
sysctl -p
cp hugepage.shm /usr/local/bin
# Verify
/usr/local/bin/hugepage.shm
ipcs -lm
ipcs -ls
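If you prefer the hugepage.fs route instead, the vm.nr_hugepages setting above is still needed; on top of it, mount a hugetlbfs at /bigpages and make it persistent (a sketch, using the mount point the question prescribes):
mkdir /bigpages
vim /etc/fstab
hugetlbfs /bigpages hugetlbfs defaults 0 0
mount -a
cp hugepage.fs /usr/local/bin
# While the program is paused, check that huge pages are being used
grep Huge /proc/meminfo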