I got a new PC that came with Windows 11, and at first it worked fine. On the second day the wired network suddenly started misbehaving: opening the local router's web page would hang, even though pings with large packets were fine. The onboard wired NIC is a Realtek part, while the built-in wireless card kept working normally. Searching online turned up nothing useful, and rolling back drivers and various other experiments didn't help either. As a last resort I changed the NIC's advanced properties, and after turning off the "接收端调整" option (which appears to be the Chinese label for Receive Side Scaling), everything went back to normal.
I picked up a second-hand, nearly new GE150 multi-effects unit. The headphone jack was making intermittent contact, so I opened it up to have a look. The solder joints turned out to be rather poor: when I reworked them with an iron the flux was still bubbling, which suggests the reflow temperature was too low or the time too short. There was also a repair sticker on the main board, and blobs of flux residue left on the core board. With workmanship like this, a bit more care would be nice.
The unit is handled mainly by a Freescale main chip, plus a GD microcontroller. I won't comment on the circuit itself, since I've never worked on audio gear.
Resoldered the joints, tested it without problems, and put it back together.
I got myself a Laney Mini STB Iron; for such a small cabinet the sound is really good.
I had first tried it on batteries, but the moment I plugged in the 220V adapter and connected the guitar, the noise defied belief (rage...). So I opened it up to see whether anything could be done, and found there is actually quite a bit of room inside.
Probing with a multimeter: the ground of the DC input and the battery ground are tied together, and there is a diode between the battery and the positive side of the DC input. With no adapter connected and only the 6 cells installed, about 9V can be measured on the DC jack.
Thinking it over: this amp is meant to be portable and its power is modest, so why not just add a rechargeable battery? I happened to have a 12V lithium battery pack that I had used earlier for field testing, and it fits exactly under the recess in the back panel. Moreover, the maximum charge voltage of a 12V lithium pack is 12.6V, so the external adapter will never exceed that; it meets the pack's charging requirement without ever fully charging it. And since 6 dry cells only give a bit over 9V, running on them won't waste much energy charging the lithium pack either. Enough talk, let's do it.
On the back there happen to be two power-input solder points: one is the pin of the DC jack, the other is a header pin on the other side of the DC jack. Solder on the wires, then pot them with glue.
I connected the two wires to the DC plug that came with the battery pack, and put glue around the plug's metal shell so it can't short against anything inside. I also glued a small plastic block underneath to keep the battery pack from shifting around and hitting the capacitor on the right.
Plug in the battery pack's power connector.
The pack's power lead is tucked in underneath, which conveniently also keeps the battery from sliding around inside.
Put the shell back on. Perfect.
OpenWiFi is a great project (apart from its rather aggressive license choice ^@^); many thanks to the original author for open-sourcing it for everyone to learn from. To make learning easier, I picked up the cheapest dev board, the AntSDR AD9363 version, for some initial testing. This post records how I set up the OpenWiFi development environment on Debian (as a loyal CentOS user, I'm now moving over to Debian entirely).
The server used here is actually the machine I had set up earlier for TensorFlow, running debian-10.11.0. For installing Debian 10, see the Debian setup part of another post: Debian 10安装TensorFlow | OopsDump. Looking at other write-ups, some mix Windows and Linux, which felt cumbersome, so I went all-in on Linux. Vivado and Matlab both have Linux versions anyway, so nothing is lost.
Install the necessary packages:
# apt-get install vim build-essential python3.7-dev python3-pip git
# apt-get install libncurses5-dev device-tree-compiler libssl-dev
Create a working directory under the user's home (all projects will go in this directory):
$ mkdir ~/workspace
$ cd ~/workspace
Note: commands starting with # are run as root, and commands starting with $ are run as a normal user.
First install a dependency (otherwise the installer hangs right at the end):
# apt-get install libtinfo5
Download Xilinx Vivado 2018.3, then extract the archive:
$ tar zxvf Xilinx_Vivado_SDK_2018.3_1207_2324.tar.gz
Then run the Vivado installer with the graphical interface:
$ su -
# ./xsetup
If the GUI cannot be displayed, add the user's X11 credentials to root:
# xauth add $(xauth -f ~<username>/.Xauthority list|tail -1)
# xauth list
# export DISPLAY=:<n>    (n is the value after the ':' in the last column shown above)
# ./xsetup
Pay attention to the following options during installation:
1. For the Edition, either HL Design or HL System works; because of the WebPACK license question I chose HL Design.
2. For device support, select only the families you will actually use, which keeps the install smaller.
3. For the install location, I suggest changing it to /opt/Xilinx.
After installation a license-import dialog pops up; since we are using an XC7Z020, WebPACK needs no license and the dialog can simply be closed.
Then try launching Vivado:
$ . /opt/Xilinx/SDK/2018.3/settings64.sh
$ vivado
****** Vivado v2018.3 (64-bit)
**** SW Build 2405991 on Thu Dec 6 23:36:41 MST 2018
**** IP Build 2404404 on Fri Dec 7 01:43:56 MST 2018
** Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
start_gui
Opening DocNav will fail because libpng12 cannot be found; it can be downloaded and installed manually:
https://blog.lazy-evaluation.net/data/libpng12-0_1.2.50-2+deb8u3dzu1_amd64.deb
# dpkg -i libpng12-0_1.2.50-2+deb8u3dzu1_amd64.deb
# apt-get install libcanberra-gtk-module
Change the default text editor to VSCode (for installing VSCode, see the previous post on setting up Debian). Inside Vivado, open the settings via the menu Tools->Settings, go to the "Text Editor" page, pick "Custom Editor" (the last entry) under Current Editor, and set the command to:
code -g [file name]:[line number]
Install the JTAG cable drivers:
# cd /opt/Xilinx/Vivado/2018.3/data/xicom/cable_drivers/lin64/install_script/install_drivers
# ./install_drivers
Note: the steps in this chapter are optional; install only what you find useful.
By default, opening a serial device requires root. To make USB-to-serial adapters easier to use, change the default permissions:
# vi /etc/udev/rules.d/70-ttyusb.rules
(add the following line)
KERNEL=="ttyUSB[0-9]*", MODE="0666"
The free ModelSim for Intel FPGA Starter Edition handles up to 10,000 lines of Verilog, which is enough for verifying small modules. Since ModelSim 10.6d is a 32-bit program, a few 32-bit libraries have to be installed (to speed this up, you can also download the 32-bit ISO and add it to the source list):
# dpkg --add-architecture i386
# apt-get update
# apt-get install lib32z1 libx11-6:i386 libxext6:i386 libxft2:i386
Download the 18.1 release of ModelSim (which corresponds to 10.6d) from the Intel FPGA site: Intel FPGAs Download Center.
When installing as root, choose the Starter edition; I suggest installing it under /opt. (Note: before installing as root, set the DISPLAY environment variable first, as described for the Vivado installation.)
After installation, go to /opt/intelFPGA_pro/18.1/modelsim_ase/bin under the ModelSim install directory and run ./vsim; it fails with Error: Cannot find "./../linux_rh60/vsim". The file /opt/intelFPGA_pro/18.1/modelsim_ase/vco has to be modified: in the block
case $utype in
2.4.[7-9]*) vco="linux" ;;
2.4.[1-9][0-9]*) vco="linux" ;;
2.[5-9]*) vco="linux" ;;
2.[1-9][0-9]*) vco="linux" ;;
3.[0-9]*) vco="linux" ;;
*) vco="linux_rh60" ;;
change linux_rh60 to linux.
Some applications will still report errors when invoked; an additional symlink is needed:
# cd /opt/intelFPGA_pro/18.1/modelsim_ase
# ln -s linuxaloem/ linuxpe
Compiling the co-simulation libraries:
Open Vivado as root and choose Tools -> Compile Simulation Libraries… from the menu. Fill in the dialog as shown in the picture below and click Compile.
When compilation finishes, just confirm that errors is 0.
In the menu choose Tools -> Settings…, go to the 3rd Party Simulators page, and fill in the following:
For each project, the simulator to use also has to be configured:
While we're at it, install the open-source Verilog tool environment as well:
# apt-get install iverilog gtkwave
For well-known reasons, cloning from GitHub with Git is slow here; you can set up a proxy and point Git at it over SOCKS5 (change the port to match your actual proxy, or edit ~/.gitconfig by hand):
$ git config --global http.proxy 'socks5://127.0.0.1:1080'
$ git config --global https.proxy 'socks5://127.0.0.1:1080'
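If you later work without the proxy, the same settings can be removed again:
$ git config --global --unset http.proxy
$ git config --global --unset https.proxy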
Create the directory ~/workspace/openwifi and keep all the openwifi code in it.
$ mkdir ~/workspace/openwifi
Clone the latest SW project with Git: git clone https://github.com/open-sdr/openwifi.git.
After cloning, enter the directory and update the Linux submodule: git submodule update --init --recursive.
Then clone the latest HW project: git clone https://github.com/open-sdr/openwifi-hw.git.
After cloning, enter the directory and update the ADI HDL and OpenOFDM submodules: git submodule update --init --recursive.
Download the image provided by ADI (later OpenWiFi versions may change this; follow the latest OpenWiFi Readme.md): https://swdownloads.analog.com/cse/2019_R1-2020_02_04.img.xz
$ cd ~/workspace/openwifi
$ git clone --recursive https://github.com/open-sdr/openwifi.git
$ git clone --recursive https://github.com/open-sdr/openwifi-hw.git
$ wget https://swdownloads.analog.com/cse/2019_R1-2020_02_04.img.xz
$ xz -d 2019_R1-2020_02_04.img.xz
$ cd openwifi/adi-linux
$ git pull origin 2019_R1:2019_R1
$ cd ../adi-linux-64
$ git pull origin 2019_R1:2019_R1
First, a word about the build environment once Vivado is installed. This setup does not use PetaLinux; it works with the raw tools instead.
The necessary environment variables have to be brought into the shell, which can be done with:
$ source /opt/Xilinx/Vivado/2018.3/settings64.sh
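If you don't want to source this in every new shell, it can be appended to your profile (purely a matter of preference; the path assumes the install location chosen earlier):
$ echo '. /opt/Xilinx/Vivado/2018.3/settings64.sh' >> ~/.bashrc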
Prepare an SD card of at least 8GB; the AntSDR takes a TF (microSD) card.
First unmount the automatically mounted SD card device. Here it is /dev/mmcblk0p1, which may differ on your machine; substitute your system's device (if the SD card has several partitions, unmount each one). Then write the img file to the SD card with dd. Finally, run sync before removing the card.
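If you are unsure which device node the card shows up as, listing the block devices first helps (the output will of course differ on your machine):
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT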
$ su -
# umount /dev/mmcblk0p1
# dd if=2019_R1-2020_02_04.img of=/dev/mmcblk0 bs=4194304
# sync
Alternatively, the img file can be mounted directly on the system. First inspect the image's partition layout with fdisk.
# fdisk -lu 2019_R1-2020_02_04.img
Disk 2019_R1-2020_02_04.img: 7.4 GiB, 7948206080 bytes, 15523840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00096174
Device Boot Start End Sectors Size Id Type
2019_R1-2020_02_04.img1 8192 2105343 2097152 1G b W95 FAT32
2019_R1-2020_02_04.img2 2105344 15501311 13395968 6.4G 83 Linux
2019_R1-2020_02_04.img3 4096 6143 2048 1M a2 unknown
Partition table entries are not in disk order.
To edit files on different partitions at the same time, losetup is used here to expose the img's partitions as separate loop devices.
# losetup -P -f --show 2019_R1-2020_02_04.img
# losetup -l
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
/dev/loop1 0 0 1 0 /opt/debian_dvd/firmware-10.11.0-amd64-DVD-1.iso 0 512
/dev/loop4 0 0 0 0 /home/oopsdump/workspace/openwifi/2019_R1-2020_02_04.img 0 512
/dev/loop2 0 0 1 0 /opt/debian_dvd/debian-10.11.0-amd64-DVD-3.iso 0 512
/dev/loop0 0 0 1 0 /opt/debian_dvd/debian-10.11.0-amd64-DVD-2.iso 0 512
/dev/loop3 0 0 1 0 /opt/debian_dvd/debian-10.11.0-amd64-DVD-1.iso 0 512
# ls /dev/loop4*
/dev/loop4 /dev/loop4p1 /dev/loop4p2 /dev/loop4p3
Mount the partitions we need:
# mkdir -p /mnt/adimg/BOOT /mnt/adimg/rootfs
# mount /dev/loop4p1 /mnt/adimg/BOOT
# mount /dev/loop4p2 /mnt/adimg/rootfs
List the directories:
# ls /mnt/adimg/BOOT
README.txt zynq-zc702-adv7511-ad9361-fmcomms2-3
socfpga_arria10_socdk_ad9172_fmc zynq-zc702-adv7511-ad9361-fmcomms5
socfpga_arria10_socdk_adrv9009 zynq-zc702-adv7511-ad9364-fmcomms4
socfpga_arria10_socdk_adrv9371 zynq-zc706-adv7511
socfpga_arria10_socdk_daq2 zynq-zc706-adv7511-ad6676-fmc
socfpga_arria10_socdk_fmclidar1 zynq-zc706-adv7511-ad9172-fmc-ebz
socfpga_cyclone5_sockit_arradio zynq-zc706-adv7511-ad9265-fmc-125ebz
uEnv.txt zynq-zc706-adv7511-ad9361-fmcomms2-3
VERSION zynq-zc706-adv7511-ad9361-fmcomms5
zynq-adrv9361-z7035-bob zynq-zc706-adv7511-ad9361-fmcomms5-ext-lo-adf5355
zynq-adrv9361-z7035-bob-cmos zynq-zc706-adv7511-ad9364-fmcomms4
zynq-adrv9361-z7035-fmc zynq-zc706-adv7511-ad9434-fmc-500ebz
zynq-adrv9361-z7035-packrf zynq-zc706-adv7511-ad9625-fmcadc2
zynq-adrv9364-z7020-bob zynq-zc706-adv7511-ad9625-fmcadc3
zynq-adrv9364-z7020-bob-cmos zynq-zc706-adv7511-ad9739a-fmc
zynq-adrv9364-z7020-packrf zynq-zc706-adv7511-adrv9008-1
zynq-common zynq-zc706-adv7511-adrv9008-2
zynqmp-adrv9009-zu11eg-revb-adrv2crr-fmc-revb zynq-zc706-adv7511-adrv9009
zynqmp-common zynq-zc706-adv7511-adrv9371
zynqmp-zcu102-rev10-ad9172-fmc-ebz-mode4 zynq-zc706-adv7511-adrv9375
zynqmp-zcu102-rev10-ad9361-fmcomms2-3 zynq-zc706-adv7511-fmcdaq2
zynqmp-zcu102-rev10-ad9361-fmcomms5 zynq-zc706-adv7511-fmcdaq3-revC
zynqmp-zcu102-rev10-ad9364-fmcomms4 zynq-zc706-adv7511-fmcjesdadc1
zynqmp-zcu102-rev10-adrv9008-1 zynq-zc706-adv7511-fmclidar1
zynqmp-zcu102-rev10-adrv9008-2 zynq-zc706-adv7511-fmcomms11
zynqmp-zcu102-rev10-adrv9009 zynq-zed-adv7511
zynqmp-zcu102-rev10-adrv9371 zynq-zed-adv7511-ad9361-fmcomms2-3
zynqmp-zcu102-rev10-adrv9375 zynq-zed-adv7511-ad9364-fmcomms4
zynqmp-zcu102-rev10-fmcdaq2 zynq-zed-adv7511-ad9467-fmc-250ebz
zynqmp-zcu102-rev10-fmcdaq3 zynq-zed-adv7511-cn0363
zynqmp-zcu102-rev10-fmclidar1 zynq-zed-imageon
zynq-zc702-adv7511
# ls /mnt/adimg/rootfs
bin boot dev etc home lib lost+found media mnt opt proc root run sbin share srv sys tmp usr var
If you mounted an SD card instead, change these to the actual mount directories.
Build using the method from the OpenWiFi Readme:
$ export SDCARD_DIR=/mnt/adimg/
$ export XILINX_DIR=/opt/Xilinx/
$ export OPENWIFI_HW_DIR=/home/oopsdump/workspace/openwifi/openwifi-hw/
$ export BOARD_NAME=antsdr
$ cd openwifi/user_space
$ ./update_sdcard.sh $OPENWIFI_HW_DIR $XILINX_DIR $BOARD_NAME $SDCARD_DIR
The updated files are mainly: rootfs/lib/modules/, rootfs/root/openwifi/, BOOT/BOOT.BIN, BOOT/devicetree.dtb, BOOT/openwifi/
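When update_sdcard.sh finishes, it is safer to flush and detach everything before using the image (device names follow the losetup example above):
# sync
# umount /mnt/adimg/BOOT /mnt/adimg/rootfs
# losetup -d /dev/loop4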
【TODO】
Obtaining an Evaluation License for the Xilinx Viterbi Decoder:
【TODO】
This post drew on:
https://blog.lazy-evaluation.net/posts/linux/vivado-2018-3-buster.html
https://blog.csdn.net/long_fly/article/details/78643258
Since CentOS 8 is about to reach end of maintenance, I switched the system to Debian and reinstalled TensorFlow on Debian 10. The version used here is Debian 10.11.0, and the machine's graphics card is a 1080 Ti. For installing on CentOS 8, see the steps I wrote up earlier: CentOS8安装TensorFlow | OopsDump
Install Debian 10.11.0 first; I won't go through it step by step. A few points to note:
1. During installation, English is the better language choice.
2. When partitioning, make the default system partition larger, or create a separate partition to mount on /opt.
3. I recommend installing the Gnome desktop environment.
All the operations below require root. One reminder: on Debian, switch to root with "su -" rather than plain "su", otherwise $PATH will not include the sbin directories.
Disable automatic standby: in the GUI, Settings->Power->Automatic suspend = off.
For machines accessed remotely, hibernation and sleep can be disabled entirely (to re-enable, change mask back to unmask):
# systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
After installation, log in and configure the package sources to use the local DVD images: installs are fastest this way, an offline machine need not worry about vulnerabilities, and it avoids the hassle of automatic upgrades. Create the directory /opt/debian_dvd/ and save DVD1-3 plus the firmware DVD there. Then create a few subdirectories under /media/ for mounting the CDs at boot; I used /media/debian_dvd1, /media/debian_dvd2, /media/debian_dvd3 and /media/debian_dvd4. Finally edit /etc/fstab and add entries so the CD images are mounted at boot.
(...)
/opt/debian_dvd/debian-10.11.0-amd64-DVD-1.iso /media/debian_dvd1 iso9660 defaults 0 0
/opt/debian_dvd/debian-10.11.0-amd64-DVD-2.iso /media/debian_dvd2 iso9660 defaults 0 0
/opt/debian_dvd/debian-10.11.0-amd64-DVD-3.iso /media/debian_dvd3 iso9660 defaults 0 0
/opt/debian_dvd/firmware-10.11.0-amd64-DVD-1.iso /media/debian_dvd4 iso9660 defaults 0 0
Edit /etc/apt/sources.list: comment out the other non-comment lines by prefixing them with #, and add the DVDs as package sources:
deb [trusted=yes] file:/media/debian_dvd1 buster contrib main
deb [trusted=yes] file:/media/debian_dvd2 buster contrib main
deb [trusted=yes] file:/media/debian_dvd3 buster contrib main
deb [trusted=yes] file:/media/debian_dvd4 buster contrib main non-free
deb http://deb.debian.org/debian/ buster main contrib
deb-src http://deb.debian.org/debian/ buster main contrib
If you'd rather not reboot, the images can be mounted manually:
# mount -o loop /opt/debian_dvd/debian-10.11.0-amd64-DVD-1.iso /media/debian_dvd1  (and similarly for the others)
Update the package lists: apt-get update
Now some commonly used packages can be installed:
# apt-get install vim curl build-essential git python3.7-dev python3-pip linux-headers-$(uname -r) libglvnd-dev dkms pkg-config
Add the current user to sudo:
# vim /etc/sudoers
(add a line for the current user)
oopsdump ALL=(ALL) ALL
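Editing /etc/sudoers directly works, but a syntax error there can lock sudo out, so a quick check afterwards doesn't hurt:
# visudo -c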
If you work remotely, disable vim's mouse support, otherwise right-click paste in PuTTY won't work: edit /usr/share/vim/vim81/defaults.vim and change mouse=a to mouse-=a.
Install the XRDP remote desktop:
# apt-get install xrdp
# systemctl enable xrdp
If the desktop is blank after logging in remotely, you can try:
# apt-get purge xserver-xorg-legacy
As the regular user (no root needed):
$ echo gnome-session > ~/.xsession
If the remote session greets you with "Authentication is required to create a color profile" and asks for an administrator password, add the file /etc/polkit-1/localauthority.conf.d/02-allow-colord-oopsdump-com.conf with the content below (note: replace {group} with the group you want to allow; the video group works):
polkit.addRule(function(action, subject) {
if ((action.id == "org.freedesktop.color-manager.create-device" ||
action.id == "org.freedesktop.color-manager.create-profile" ||
action.id == "org.freedesktop.color-manager.delete-device" ||
action.id == "org.freedesktop.color-manager.delete-profile" ||
action.id == "org.freedesktop.color-manager.modify-device" ||
action.id == "org.freedesktop.color-manager.modify-profile") &&
subject.isInGroup("{group}")) {
return polkit.Result.YES;
}
});
Note that if a user is still logged in at the local graphical console, the remote login will fail. You can force that user to log out over SSH with the command below (adjust DISPLAY to the actual value; ps -ef|grep Xorg shows the value containing ":"):
env DISPLAY=:0.0 gnome-session-quit --logout
Configure Chinese locale support:
# dpkg-reconfigure locales
**select zh_CN.UTF-8
**keep en_US.UTF-8 as the default language
Install Chinese input support:
# apt-get install fcitx fcitx-googlepinyin
# fcitx-configtool
**in the dialog, click the + button at the bottom left
**untick Only Show Current Language
**select Google Pinyin at the bottom, then OK
**after that, Ctrl + Space switches input methods
Install VSCode for coding:
# curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
# install -o root -g root -m 644 microsoft.gpg /usr/share/keyrings/microsoft-archive-keyring.gpg
# sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/usr/share/keyrings/microsoft-archive-keyring.gpg] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'
# apt-get install apt-transport-https
# apt-get update
# apt-get install code
The Nvidia driver can be downloaded from the official site: https://www.nvidia.com/Download/index.aspx?lang=en-us. Choose the GeForce type, GeForce 10 series, GTX 1080 Ti.
Or use the link I downloaded from: https://us.download.nvidia.cn/XFree86/Linux-x86_64/470.57.02/NVIDIA-Linux-x86_64-470.57.02.run
First disable the open-source Nouveau driver (for the reasons, see the earlier post on installing the Nvidia driver under CentOS 8):
# vim /etc/modprobe.d/nouveau.conf
Put the following in the file (the filename must end in .conf for modprobe to read it):
blacklist nouveau
options nouveau modeset=0
# depmod -ae
# vim /etc/default/grub
Append at the end of the file:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rd.driver.blacklist=nouveau nouveau.modeset=0"
# update-grub2
# update-initramfs -u
Switch off the system GUI and reboot (with the GUI running, nouveau still gets loaded automatically):
# systemctl set-default multi-user.target
# systemctl reboot
First unload the Nouveau driver: rmmod nouveau
Install the Nvidia proprietary driver: ./NVIDIA-Linux-x86_64-470.57.02.run
Restore the graphical target and reboot:
# systemctl set-default graphical.target
# systemctl reboot
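After the reboot, a quick way to see which kernel modules actually got loaded (just a sanity check):
# lsmod | grep -E 'nouveau|nvidia'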
I found that the Nouveau driver still gets loaded, but the Nvidia driver now loads as well. At this point it can be verified:
# nvidia-smi
Thu Nov 1 09:08:34 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:05:00.0 On | N/A |
| 21% 38C P8 11W / 250W | 65MiB / 11176MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1040 G /usr/lib/xorg/Xorg 39MiB |
| 0 N/A N/A 1070 G /usr/bin/gnome-shell 22MiB |
+-----------------------------------------------------------------------------+
For CUDA 11.4, just follow the steps on the official site. Or use the link I downloaded from: https://developer.download.nvidia.com/compute/cuda/11.4.2/local_installers/cuda_11.4.2_470.57.02_linux.run
Install CUDA 11.4: ./cuda_11.4.2_470.57.02_linux.run
During installation the driver can be deselected (the same driver version was already installed above):
 CUDA Installer
  - [ ] Driver
        [ ] 470.57.02
  + [X] CUDA Toolkit 11.4
        [X] CUDA Samples 11.4
        [X] CUDA Demo Suite 11.4
        [X] CUDA Documentation 11.4
    Options
    Install

 Up/Down: Move | Left/Right: Expand | 'Enter': Select | 'A': Advanced options
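The toolkit itself lands under /usr/local/cuda (by default a symlink to /usr/local/cuda-11.4). If you also want nvcc and the CUDA libraries available outside the virtualenv configured later in this post, you can export the usual paths in your shell (optional; paths assume the default install location):
$ export PATH=/usr/local/cuda/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH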
Verify that it installed correctly:
# cd /usr/local/cuda/samples/1_Utilities/deviceQuery
# make
.....
# ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA GeForce GTX 1080 Ti"
CUDA Driver Version / Runtime Version 11.4 / 11.4
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 11176 MBytes (11719409664 bytes)
(028) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
GPU Max Clock rate: 1582 MHz (1.58 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 2883584 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 98304 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 5 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS
# make clean
...
cuDNN has to be downloaded manually: https://developer.nvidia.com/rdp/cudnn-download
Downloading cuDNN requires registration; if you'd rather not register, you can use the link I downloaded from: https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.4/11.4_20210831/cudnn-11.4-linux-x64-v8.2.4.15.tgz
To install:
# cd /usr/local/
# tar -xzvf <your save directory>cudnn-11.4-linux-x64-v8.2.4.15.tgz
# sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
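To confirm which cuDNN version ended up under /usr/local/cuda, the version header can be checked (for cuDNN 8.x the version macros live in cudnn_version.h):
# grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn_version.h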
Install virtualenv, to keep TensorFlow isolated from the system's own packages:
# pip3 install --upgrade virtualenv
Create a shared virtual environment:
# mkdir -p /var/venvs/
# virtualenv --system-site-packages /var/venvs/tensorflow
Apply the diff below to /var/venvs/tensorflow/bin/activate:
unset _OLD_VIRTUAL_PYTHONHOME
fi
+ if ! [ -z "${_OLD_VIRTUAL_LIB:+_}" ] ; then
+ LD_LIBRARY_PATH="$_OLD_VIRTUAL_LIBPATH_OPPSDUMP_COM"
+ export LD_LIBRARY_PATH
+ unset _OLD_VIRTUAL_LIBPATH_OPPSDUMP_COM
+ fi
# This should detect bash and zsh, which have a hash command that must
_OLD_VIRTUAL_PATH="$PATH"
- PATH="$VIRTUAL_ENV/bin:$PATH"
+ PATH="$VIRTUAL_ENV/bin:/usr/local/cuda/bin:$PATH"
export PATH
+ _OLD_VIRTUAL_LIBPATH_OPPSDUMP_COM="$LD_LIBRARY_PATH"
+ LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
+ export LD_LIBRARY_PATH
+ export CUDADIR=/usr/local/cuda
# unset PYTHONHOME if set
Whenever you want to use TensorFlow, first run:
# source /var/venvs/tensorflow/bin/activate
Install TensorFlow:
**CPU version
# pip3 install --upgrade tensorflow
**GPU version
# pip3 install --upgrade tensorflow-gpu
**an older version
# pip3 install tensorflow=={package_version}
**other packages, e.g. PyTorch
# pip3 install --upgrade torch
**if installing tensorflow hits an h5py build problem, the following works around it:
# env H5PY_SETUP_REQUIRES=0 pip3 install -U --no-build-isolation h5py==3.1.0
Test with:
$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
If you run into this problem:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
(...)
downgrade numpy:
# pip3 uninstall numpy
# pip3 install numpy==1.16.4
Running the test again now succeeds.
When using Linux's /dev/input/eventX interface to read keyboard events, if the program restarts while a key is already held down, no update event will ever arrive for that key (there is simply nothing new to report). So at startup the program has to query the current key state first, which can be done with the EVIOCGKEY ioctl.
I looked at examples online and found they break on MIPS. Reading the kernel code, the examples are actually wrong; they only happen to work on little-endian CPUs. The buffer receiving the state should not be of type uint8_t but unsigned long. So it should look like this:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>
#include <string.h>

#define EVDEV "/dev/input/event2"

int main(int argc, char **argv) {
    /* Bit array of key states; the indexing below assumes a 32-bit
       unsigned long, as on the MIPS32 target discussed above. */
    unsigned long key_states[KEY_MAX/32 + 1];
    struct input_event evt;
    int fd;

    memset(key_states, 0, sizeof(key_states));
    fd = open(EVDEV, O_RDWR);
    if (fd < 0) {
        perror("open " EVDEV);
        return 1;
    }
    /* Fetch the current key-down state before entering the event loop. */
    ioctl(fd, EVIOCGKEY(sizeof(key_states)), key_states);
    //Key value is: (key_states[key/32]>>(key%32)) & 1
    //.... (read() events into evt here)
    //Wrote by: oopsdump.com
    close(fd);
    return 0;
}
The example above draws on: https://stackoverflow.com/questions/27063833/linux-input-device-events-how-to-retrieve-initial-state.
I had always used VNC to reach my Linux servers' remote desktops. While looking for something more convenient I came across xrdp, which lets you connect directly with the Windows Remote Desktop client. Below are the installation steps and fixes for the problems I ran into. If you repost this, please credit Oopsdump.com, thanks.
First, CentOS needs the EPEL repository installed; skip this step if it already is:
yum install epel-release
Then install xrdp and the VNC server it needs:
yum install xrdp tigervnc-server
Finally, start the service and enable it at boot:
systemctl start xrdp
systemctl enable xrdp
If the firewall is enabled, the corresponding port also has to be opened:
firewall-cmd --permanent --zone=public --add-port=3389/tcp
firewall-cmd --reload
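To confirm the port is actually open, the zone's ports can be listed:
firewall-cmd --zone=public --list-ports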
Problems you may run into:
If the remote session shows wrong colors, adjust xrdp.ini: change max_bpp=32 to max_bpp=24 in /etc/xrdp/xrdp.ini.
If after logging in remotely you get "Authentication is required to create a color profile" and are asked for an administrator password, add the file /etc/polkit-1/rules.d/02-allow-colord-oopsdump-com.conf with the content below (note: replace {group} with the group you want to allow):
polkit.addRule(function(action, subject) {
if ((action.id == "org.freedesktop.color-manager.create-device" ||
action.id == "org.freedesktop.color-manager.create-profile" ||
action.id == "org.freedesktop.color-manager.delete-device" ||
action.id == "org.freedesktop.color-manager.delete-profile" ||
action.id == "org.freedesktop.color-manager.modify-device" ||
action.id == "org.freedesktop.color-manager.modify-profile") &&
subject.isInGroup("{group}")) {
return polkit.Result.YES;
}
});
If that still doesn't help, add the file 45-allow-colord-oopsdump-com.pkla under /etc/polkit-1/localauthority/50-local.d/:
[Allow Colord all Users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile
ResultAny=no
ResultInactive=no
ResultActive=yes
After these changes, restart the polkit service or reboot, e.g.: systemctl restart polkit.
If xrdp refuses to start, try:
chcon -t bin_t /usr/sbin/xrdp
chcon -t bin_t /usr/sbin/xrdp-sesman
To disable a feature, comment out the corresponding channel in /etc/xrdp/xrdp.ini with # and restart the service:
rdpdr=true    enables device redirection.
rdpsnd=true   enables sound.
drdynvc=true  enables initialization of additional dynamic channels.
cliprdr=true  enables clipboard redirection.
rail=true     enables remote application integration (RAIL).
xrdpvr=true   enables XRDP video streaming.
tcutils=true  enables remote USB eject, power off and similar functions.
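As a concrete example of the kind of edit meant here (a sketch; back up xrdp.ini first), disabling sound redirection and restarting the service:
sed -i 's/^rdpsnd=true/#rdpsnd=true/' /etc/xrdp/xrdp.ini
systemctl restart xrdp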
Download the latest iverilog build from the Icarus Verilog site and install it: http://iverilog.icarus.com/. During installation, choose the option that adds it to PATH.
Download and install VSCode from the official site. (If you repost this, please credit Oopsdump.com, thanks.)
Add Chinese language support to VSCode: Ctrl+Shift+X (menu: View->Extensions), search for chinese, and install the corresponding pack.
If the UI still isn't in Chinese, press Ctrl+Shift+P, search for lang, choose Configure Display Language, then select zh-cn.
Install Verilog support in VSCode: Ctrl+Shift+X (menu: View->Extensions), search for verilog, and install Verilog HDL/SystemVerilog.
That extension needs ctags; download and install the Windows build of ctags: https://github.com/universal-ctags/ctags-win32. Open File->Preferences->Settings, then Extensions->Verilog configuration. Set verilog.ctags.path to the ctags location, e.g. C:\Oopsdump_Com_Apps\ctags, and set verilog.linting.linter to iverilog.
Testbenches can be generated automatically: Ctrl+Shift+X (menu: View->Extensions), search for testbench, and install Verilog_Testbench. The feature is a Python script, so a Python environment is needed; install Python 3 and choose the add-to-PATH option during setup. Then open a cmd window (Win+X) and run pip3.7 install chardet to install the required package.
Press Ctrl+Shift+P and search for testbench to generate the test automatically.
Press Ctrl+Shift+P and search for verilog to use the Rerun lint tool and Instantiate Module commands.
If you hit "Cannot read property 'logger' of undefined" and your Verilog HDL/SystemVerilog version is 1.0.4, roll back to 1.0.3: in the extension manager, open that extension's menu and choose "Install Another Version…".
Basic iverilog usage:
# iverilog -o test_module.vvp test.v
# vvp test_module.vvp
You can also use a batch file like the following:
@echo off
rem If the tools are not on PATH, uncomment the lines below
rem set iverilog_path=c:\iverilog\bin;
rem set gtkwave_path=c:\iverilog\gtkwave\bin;
rem set path=%iverilog_path%%gtkwave_path%%path%
set testbentch_module=hello_world_tb
set testbentch_file="./%testbentch_module%.v"
iverilog -o "%testbentch_module%.vvp" %testbentch_file%
vvp "%testbentch_module%.vvp"
set gtkw_file="%testbentch_module%.gtkw"
if exist %gtkw_file% (gtkwave %gtkw_file%) else (gtkwave "%testbentch_module%.vcd")
References:
Icarus Verilog和GTKwave使用简析:https://blog.csdn.net/husipeng86/article/details/60469543
After many years of embedded work, it's time to follow the trend a bit, so I put together a 1080 Ti machine to play with machine learning. There aren't many articles about installing TensorFlow on CentOS 8, so this post summarizes my installation steps. If you repost this, please credit Oopsdump.com, thanks.
CentOS 8 ships with Python 3.6.8. If Python isn't installed, it can be installed with:
yum -y install python36
Install the other required dependencies:
sudo yum -y install gcc gcc-c++ python3-pip python36-devel atlas atlas-devel gcc-gfortran openssl-devel libffi-devel
The Nvidia graphics driver can be downloaded from the official site:
https://www.nvidia.com/Download/index.aspx?lang=en-us
Or use my download link: http://us.download.nvidia.com/XFree86/Linux-x86_64/440.44/NVIDIA-Linux-x86_64-440.44.run
Problems that may come up during installation:
Installer error: you appear to be running an x server please exit x before installing
Fix: stop the X server as root from the command line, then rerun the installer.
First log out of the current account; at the login screen press Ctrl+Alt+F1 to get a pure command-line console.
Switch to root with su.
Run systemctl stop gdm.service to stop the X server, then rerun the driver installer. Warnings during the install can be ignored; when asked whether to support the X server, change No to Yes. After the install, enter reboot; after the reboot the graphical interface comes back.
Installer error: ERROR: The Nouveau kernel driver is currently in use by your system. This driver is incompatible with the NVIDIA driver……
Fix: disable the original Nouveau driver.
Nouveau is an open-source 3D driver for NVIDIA cards developed by third parties; it has to be disabled before the new driver can load.
Open /etc/modprobe.d/blacklist.conf and add:
blacklist nouveau
Open /usr/lib/modprobe.d/dist-blacklist.conf and add two lines:
blacklist nouveau
options nouveau modeset=0
Rebuild the initramfs, backing up the original image that contains nouveau:
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
dracut /boot/initramfs-$(uname -r).img $(uname -r)
Install dkms:
yum install kernel-devel
yum -y install epel-release
yum -y install dkms
Install libglvnd:
dnf groupinstall "Development Tools"
dnf install libglvnd-devel elfutils-libelf-devel
Reboot and install the NVIDIA driver: ./NVIDIA-Linux-x86_64-384.90-1080ti.run
(If nothing shows on screen, try Ctrl+Alt+F2 or log in over SSH.) If the display still doesn't come up after logging in, try nvidia-xconfig.
Verify that it installed correctly:
# nvidia-smi
Wed Dec 4 04:35:16 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... Off | 00000000:02:00.0 On | N/A |
| 62% 66C P0 N/A / 95W | 4024MiB / 4039MiB | 88% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1470 G /usr/libexec/Xorg 39MiB |
| 0 1840 G /usr/bin/gnome-shell 42MiB |
| 0 6996 C .../tensorflow-gpu-1.15.0/venv/bin/python3 3925MiB |
+-----------------------------------------------------------------------------+
For CUDA 10.2, just follow the steps on the official site.
Or use my download link: https://developer.download.nvidia.cn/compute/cuda/10.2/Prod/local_installers/cuda-repo-rhel8-10-2-local-10.2.89-440.33.01-1.0-1.x86_64.rpm
Verify that it installed correctly:
cd /usr/local/cuda/samples
make
cd 1_Utilities/deviceQuery
make
ls
deviceQuery  deviceQuery.cpp  deviceQuery.o  Makefile  NsightEclipse.xml  readme.txt
(venv) [root@localhost deviceQuery]# ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1050 Ti"
CUDA Driver Version / Runtime Version 10.2 / 10.2
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 4040 MBytes (4235919360 bytes)
( 6) Multiprocessors, (128) CUDA Cores/MP: 768 CUDA Cores
GPU Max Clock rate: 1493 MHz (1.49 GHz)
Memory Clock rate: 3504 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 1048576 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 2 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS
The display driver bundled in that package is fairly old; if the display fails to start after installing it, log in over SSH, install the display driver again, and run nvidia-xconfig.
cuDNN has to be downloaded manually:
https://developer.nvidia.com/rdp/cudnn-download
Registration may be required; if you'd rather not register, you can use the link I downloaded from:
https://developer.download.nvidia.cn/compute/redist/cudnn/v7.6.5/cudnn-10.2-linux-x64-v7.6.5.32.tgz
To install:
$ cd /usr/local/cuda/
$ tar -xzvf cudnn-10.2-linux-x64-v7.6.5.32.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
Install virtualenv, to keep TensorFlow isolated from CentOS's own packages:
pip3 install --upgrade virtualenv
Create a shared virtual environment:
mkdir -p /var/venvs/
virtualenv --system-site-packages /var/venvs/tensorflow
Apply the diff below to /var/venvs/tensorflow/bin/activate:
unset _OLD_VIRTUAL_PYTHONHOME
fi
+ if ! [ -z "${_OLD_VIRTUAL_LIB:+_}" ] ; then
+ LD_LIBRARY_PATH="$_OLD_VIRTUAL_LIBPATH_OPPSDUMP_COM"
+ export LD_LIBRARY_PATH
+ unset _OLD_VIRTUAL_LIBPATH_OPPSDUMP_COM
+ fi
# This should detect bash and zsh, which have a hash command that must
_OLD_VIRTUAL_PATH="$PATH"
- PATH="$VIRTUAL_ENV/bin:$PATH"
+ PATH="$VIRTUAL_ENV/bin:/usr/local/cuda/bin:$PATH"
+ export PATH
+ _OLD_VIRTUAL_LIBPATH_OPPSDUMP_COM="$LD_LIBRARY_PATH"
+ LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
+ export LD_LIBRARY_PATH
+ export CUDADIR=/usr/local/cuda
# unset PYTHONHOME if set
Whenever you want to use TensorFlow, first run:
source /var/venvs/tensorflow/bin/activate
Install TensorFlow:
# CPU version
pip install --upgrade tensorflow
# GPU version
pip install --upgrade tensorflow-gpu
# an older version
pip install tensorflow=={package_version}
Test with:
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
If you see output like:
2019-12-04 09:47:43.083342: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-04 09:47:43.083399: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64:
2019-12-04 09:47:43.083440: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64:
2019-12-04 09:47:43.083478: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64:
2019-12-04 09:47:43.083516: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64:
2019-12-04 09:47:43.083553: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64:
2019-12-04 09:47:43.083593: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64:
2019-12-04 09:47:43.083599: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
then symlinks need to be created in /usr/local/cuda/lib64/:
cd /usr/local/cuda/lib64/
ln -s libcudart.so.10.2 libcudart.so.10.0
ln -s libcufft.so.10.1.2.89 libcufft.so.10.0
ln -s libcurand.so.10.1.2.89 libcurand.so.10.0
ln -s libcusolver.so.10.3.0.89 libcusolver.so.10.0
ln -s libcusparse.so.10.3.1.89 libcusparse.so.10.0
A symlink is also needed in /usr/lib64:
cd /usr/lib64
ln -s libcublas.so.10.2.2.89 libcublas.so.10.0
Note: if SELinux is in use, the symlink files need their context user changed with chcon -u system_u.
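For example, for the links created above (a sketch following the flag mentioned in the note; add -h if it is the context of the link itself rather than its target that needs changing):
chcon -u system_u /usr/lib64/libcublas.so.10.0
chcon -u system_u /usr/local/cuda/lib64/libcudart.so.10.0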
This post drew on the following article, with thanks:
https://blog.csdn.net/happyfreeangel/article/details/103392787