```bash
root@core01:~# systemctl restart ypserv ypbind yppasswdd   ## start the services
root@core01:/etc# systemctl enable ypserv ypbind yppasswdd ## enable them at boot
Synchronizing state of ypserv.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ypserv
Synchronizing state of ypbind.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ypbind
Synchronizing state of yppasswdd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable yppasswdd
Created symlink /etc/systemd/system/multi-user.target.wants/ypserv.service → /lib/systemd/system/ypserv.service.
Created symlink /etc/systemd/system/multi-user.target.wants/ypbind.service → /lib/systemd/system/ypbind.service.
Created symlink /etc/systemd/system/multi-user.target.wants/yppasswdd.service → /lib/systemd/system/yppasswdd.service.
```
Initialize the server:
```bash
root@core01:/etc# /usr/lib/yp/ypinit -m

At this point, we have to construct a list of the hosts which will run NIS
servers.  core01 is in the list of NIS server hosts.  Please continue to add
the names for the other hosts, one per line.  When you are done with the
list, type a <control D>.
        next host to add:  core01
        next host to add:  <control D>
The current list of NIS servers looks like this:

core01

Is this correct?  [y/n: y]  y
We need a few minutes to build the databases...
Building /var/yp/core01/ypservers...
Running /var/yp/Makefile...
gmake[1]: Entering directory '/var/yp/core01'
Updating passwd.byname...
Updating passwd.byuid...
Updating group.byname...
Updating group.bygid...
Updating hosts.byname...
Updating hosts.byaddr...
Updating rpc.byname...
Updating rpc.bynumber...
Updating services.byname...
Updating services.byservicename...
Updating netid.byname...
Updating protocols.bynumber...
Updating protocols.byname...
Updating netgroup...
Updating netgroup.byhost...
Updating netgroup.byuser...
Updating shadow.byname...
gmake[1]: Leaving directory '/var/yp/core01'

core01 has been set up as a NIS master server.

Now you can run ypinit -s core01 on all slave server.
```
The output indicates that `ypinit -s core01` can be run on the clients to establish communication with the master.
Testing:
```bash
root@core01:/etc# yptest
Test 1: domainname
Configured domainname is "core01"
Test 2: ypbind
Use Protocol V1: Used NIS server: 192.168.1.210
Use Protocol V2: Used NIS server: 192.168.1.210
Use Protocol V3:
ypbind_nconf:
  nc_netid: udp
  nc_semantics: 1
  nc_flag: 1
  nc_protofmly: 'inet'
  nc_proto: 'udp'
  nc_device: '-'
  nc_nlookups: 0
ypbind_svcaddr: 192.168.1.210:1010
ypbind_servername: core01
ypbind_hi_vers: 2
ypbind_lo_vers: 2
Test 3: yp_match
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
Test 4: yp_first
ncl ncl:x:1001:1001::/home/ncl:/bin/bash
Test 5: yp_next
rli7 rli7:x:1000:1000:rli7:/home/rli7:/bin/bash
nobody nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
Test 9: yp_all
ncl ncl:x:1001:1001::/home/ncl:/bin/bash
rli7 rli7:x:1000:1000:rli7:/home/rli7:/bin/bash
nobody nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
All tests passed
```
As the output shows, the server has passed all of the tests, and three user entries have been loaded into the NIS maps. The client can now be configured.
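Note that the maps are built from the master's local account files when ypinit runs; if users are later added or changed on core01, the maps have to be rebuilt and pushed again. A minimal sketch, assuming the default /var/yp/Makefile shipped with the nis package:

```bash
## On the master (core01): rebuild the NIS maps after adding or
## modifying local users or groups.
root@core01:~# cd /var/yp && make
```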
NIS Client
Here the machine core02 is used as the client (slave):
```bash
root@core02:~# sudo apt install nis
root@core02:~# vim /etc/yp.conf
# /etc/yp.conf
ypserver core01
root@core02:~# vim /etc/default/nis
# /etc/default/nis
# Are we a NIS server and if so what kind (values: false, slave, master)?
NISSERVER=slave
# Are we a NIS client?
NISCLIENT=true
# NIS master server.
NISMASTER=core01
root@core02:~# domainname core01
root@core02:~# domainname
core01
root@core02:~# vim /etc/defaultdomain
core01
```
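For the NIS accounts to be visible to commands such as su and ls -l, the name service switch must also consult NIS. On Debian/Ubuntu the nis package normally arranges this; the relevant lines in /etc/nsswitch.conf look roughly like the sketch below (the exact defaults on your system may differ, e.g. compat mode with +:: entries in /etc/passwd is an equivalent setup):

```bash
# /etc/nsswitch.conf (relevant lines)
passwd:         files nis
group:          files nis
shadow:         files nis
hosts:          files dns nis
```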
Start the services:
```bash
root@core02:~# systemctl restart ypserv ypbind yppasswdd
root@core02:~# systemctl enable ypserv ypbind yppasswdd
Synchronizing state of ypserv.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ypserv
Synchronizing state of ypbind.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ypbind
Synchronizing state of yppasswdd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable yppasswdd
Created symlink /etc/systemd/system/multi-user.target.wants/ypserv.service → /lib/systemd/system/ypserv.service.
Created symlink /etc/systemd/system/multi-user.target.wants/ypbind.service → /lib/systemd/system/ypbind.service.
Created symlink /etc/systemd/system/multi-user.target.wants/yppasswdd.service → /lib/systemd/system/yppasswdd.service.
```
Establish communication with the master:
```bash
root@core02:~# /usr/lib/yp/ypinit -s core01
root@core02:~# yptest
Test 1: domainname
Configured domainname is "core01"
Test 2: ypbind
Use Protocol V1: Used NIS server: 192.168.1.210
Use Protocol V2: Used NIS server: 192.168.1.210
Use Protocol V3:
ypbind_nconf:
  nc_netid: udp
  nc_semantics: 1
  nc_flag: 1
  nc_protofmly: 'inet'
  nc_proto: 'udp'
  nc_device: '-'
  nc_nlookups: 0
ypbind_svcaddr: 192.168.1.210:1010
ypbind_servername: core01
ypbind_hi_vers: 2
ypbind_lo_vers: 2
Test 3: yp_match
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
Test 4: yp_first
ncl ncl:x:1001:1001::/home/ncl:/bin/bash
Test 5: yp_next
rli7 rli7:x:1000:1000:rli7:/home/rli7:/bin/bash
nobody nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
Test 9: yp_all
ncl ncl:x:1001:1001::/home/ncl:/bin/bash
rli7 rli7:x:1000:1000:rli7:/home/rli7:/bin/bash
nobody nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
All tests passed
```
As the output shows, users from core01 such as ncl have been synchronized over. Switch to one of them:
```bash
root@core02:~# su ncl
ncl@core02:/root$ ls
ls: cannot open directory '.': Permission denied
ncl@core02:/root$ cd
bash: cd: /home/ncl: No such file or directory
```
The NIS account ncl exists on core02, but its home directory /home/ncl only exists on core01, so the home directories need to be shared over NFS.

```bash
## Install nfs-kernel-server
root@core01:/etc# apt install nfs-kernel-server
root@core01:/etc# vim /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
```
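The transcript stops at opening /etc/exports. As a sketch of the usual next steps (the exported path and the subnet below are assumptions based on the addresses used above): export /home on the master, reload the export table, and mount it on the client so NIS users find their home directories.

```bash
## On core01: export /home to the local subnet, then reload the exports.
root@core01:/etc# echo '/home 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
root@core01:/etc# exportfs -ra

## On core02: install the NFS client tools and mount the shared /home.
root@core02:~# apt install nfs-common
root@core02:~# mount -t nfs core01:/home /home
```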
LDAP, or Lightweight Directory Access Protocol, is an open and cross-platform protocol for accessing and maintaining distributed directory information services over a network. LDAP directories are often used for centralized storage of information like user accounts, group memberships, and network configurations.
LDAP directories consist of entries organized in a hierarchical tree structure. Each entry represents an object, and each object has attributes with values. The entries can represent users, groups, devices, and other types of entities.
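As a concrete illustration of entries and attributes, a centralized POSIX user account could be represented by an LDIF entry like the one below and loaded with ldapadd. This is only a sketch: the suffix dc=example,dc=com, the admin DN, and all attribute values are placeholders.

```bash
# Hypothetical user entry; adjust the suffix and values for your directory.
cat > ncl.ldif <<'EOF'
dn: uid=ncl,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: ncl
sn: ncl
uid: ncl
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/ncl
loginShell: /bin/bash
EOF
ldapadd -x -D cn=admin,dc=example,dc=com -W -f ncl.ldif
```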
Setting up LDAP on an Ubuntu Cluster
1. Install LDAP Server
On each node in your cluster, install the LDAP server. For OpenLDAP, you can use the following:
```bash
sudo apt update
sudo apt install slapd ldap-utils
```
2. Configure LDAP
Configure the LDAP server, paying attention to the organization name, domain name, administrator password, and other parameters; see the sketch below.
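On Ubuntu the usual way to set these parameters is to reconfigure the slapd package, which prompts for them interactively. A minimal sketch, assuming the stock Debian/Ubuntu packaging:

```bash
# Re-run the interactive slapd setup (DNS domain name, organization,
# administrator password, database backend).
sudo dpkg-reconfigure slapd

# Quick sanity check: dump the first few lines of the local database.
sudo slapcat | head
```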
3. Cluster Considerations
If your cluster nodes need to share the LDAP data, you might need to set up replication or synchronization between LDAP servers on different nodes. OpenLDAP supports replication to keep data consistent across multiple servers. Consult the OpenLDAP documentation for details on setting up replication.
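For example, OpenLDAP's syncrepl can replicate changes from a provider node to a consumer node. The consumer-side sketch below, applied against cn=config with ldapmodify, is only illustrative: the provider URL, bind DN, credentials, database index, and search base are placeholders that must match your own deployment.

```bash
# Hypothetical consumer-side syncrepl configuration.
sudo ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=001
  provider=ldap://ldap-provider.example.com
  bindmethod=simple
  binddn="cn=replicator,dc=example,dc=com"
  credentials=secret
  searchbase="dc=example,dc=com"
  type=refreshAndPersist
  retry="60 +"
EOF
```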
4. Client Configuration
On each node or client machine in your cluster, install LDAP client utilities:
```bash
sudo apt install ldap-utils
```

Configure the client to connect to the LDAP server(s) in your cluster. Edit /etc/ldap/ldap.conf and /etc/nsswitch.conf to specify LDAP as a source for user information.
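A minimal sketch of the two files (the server URI and base DN are placeholders, and resolving users through LDAP in nsswitch.conf additionally requires an NSS LDAP module such as libnss-ldapd/nslcd, which is assumed here):

```bash
# /etc/ldap/ldap.conf -- defaults for the LDAP command-line tools
BASE    dc=example,dc=com
URI     ldap://ldap-provider.example.com

# /etc/nsswitch.conf -- relevant lines
passwd:         files ldap
group:          files ldap
shadow:         files ldap
```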
5. Test Configuration
Use tools like ldapsearch to test the LDAP configuration on each node and ensure that LDAP clients can query the directory.
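For example, an anonymous and an authenticated query (the host and base DN are placeholders):

```bash
# Anonymous search for all POSIX accounts under the base DN.
ldapsearch -x -H ldap://ldap-provider.example.com -b dc=example,dc=com '(objectClass=posixAccount)'

# Bind as the admin DN and fetch a single user entry.
ldapsearch -x -D cn=admin,dc=example,dc=com -W -b dc=example,dc=com uid=ncl
```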
Please note that the specifics of setting up LDAP in a cluster may depend on the cluster type (e.g., Kubernetes or Hadoop) and your specific requirements. Always refer to the documentation of the LDAP server software you are using and any cluster management tools you have in place. Additionally, consider security aspects, such as encryption (SSL/TLS) and access control policies, for your LDAP deployment.