I have a VPC with a private and a public subnet, each containing an identically built RHEL7 server. I believe the VPC is set up correctly (see below). However, the public server can use yum and the private one can't. The private one receives the error…
$ yum search apache
Failed to set locale, defaulting to C
Loaded plugins: amazon-id, rhui-lb
Repo rhui-REGION-client-config-server-7 forced skip_if_unavailable=True due to: /etc/pki/rhui/cdn.redhat.com-chain.crt
Repo rhui-REGION-client-config-server-7 forced skip_if_unavailable=True due to: /etc/pki/rhui/product/rhui-client-config-server-7.crt
Repo rhui-REGION-client-config-server-7 forced skip_if_unavailable=True due to: /etc/pki/rhui/rhui-client-config-server-7.key
Repo rhui-REGION-rhel-server-releases forced skip_if_unavailable=True due to: /etc/pki/rhui/cdn.redhat.com-chain.crt
Repo rhui-REGION-rhel-server-releases forced skip_if_unavailable=True due to: /etc/pki/rhui/product/content-rhel7.crt
Repo rhui-REGION-rhel-server-releases forced skip_if_unavailable=True due to: /etc/pki/rhui/content-rhel7.key
Repo rhui-REGION-rhel-server-rh-common forced skip_if_unavailable=True due to: /etc/pki/rhui/cdn.redhat.com-chain.crt
Repo rhui-REGION-rhel-server-rh-common forced skip_if_unavailable=True due to: /etc/pki/rhui/product/content-rhel7.crt
Repo rhui-REGION-rhel-server-rh-common forced skip_if_unavailable=True due to: /etc/pki/rhui/content-rhel7.key
Could not contact CDS load balancer rhui2-cds01.us-east-1.aws.ce.redhat.com, trying others.
Could not contact any CDS load balancers: rhui2-cds01.us-east-1.aws.ce.redhat.com, rhui2-cds02.us-east-1.aws.ce.redhat.com.
Network
I have an AWS VPC using RHEL7 AMIs.
- I have one VPC 10.0.0.0/16
- I have one public subnet 10.0.0.0/24
- I have one private subnet 10.0.1.0/24
- I have an internet gateway
- I have a NAT
- The main route table is pointing to the NAT:
  Destination   Target                      Status   Propagated
  10.0.0.0/16   local                       Active   No
  0.0.0.0/0     eni-xxxxxxxx / i-xxxxxxxx   Active   No
- The private subnet is associated with the main route table
- The second (not main) route table is pointing to the gateway:
  Destination   Target         Status   Propagated
  10.0.0.0/16   local          Active   No
  0.0.0.0/0     igw-xxxxxxxx   Active   No
- The public subnet is associated with this route table
- All the security groups are wide open for troubleshooting
- I have enabled forwarding on the NAT (and sometimes masquerading…see below):
sysctl -q -w net.ipv4.ip_forward=1 net.ipv4.conf.eth0.send_redirects=0
PRIVATE_SUBNETS="10.0.1.0/24"
for SUBNET in $PRIVATE_SUBNETS; do
  iptables -t nat -C POSTROUTING -o eth0 -s $SUBNET -j MASQUERADE 2> /dev/null ||
    iptables -t nat -A POSTROUTING -o eth0 -s $SUBNET -j MASQUERADE
done
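For what it's worth, the effect of those commands can be confirmed on the NAT instance itself (as root). This is a generic diagnostic sketch, not part of the original setup; the interface name eth0 and the 10.0.1.0/24 subnet are taken from the script above:

```shell
# Should print ip_forward = 1 and send_redirects = 0.
sysctl net.ipv4.ip_forward net.ipv4.conf.eth0.send_redirects

# List the nat table's POSTROUTING rules; the MASQUERADE rule for
# 10.0.1.0/24 should appear if the loop above added it.
iptables -t nat -S POSTROUTING

# Packet/byte counters: these should increase while the private-subnet
# server attempts a yum transaction. If they stay at zero, the traffic
# never matches the rule.
iptables -t nat -L POSTROUTING -v -n
```

Note that sysctl changes made with -w do not survive a reboot unless they are also added to /etc/sysctl.conf (or a file under /etc/sysctl.d/).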
- There is a server in each of the public and private subnets
- Both are built from the same RHEL7 AMI.
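One NAT-instance detail not mentioned above that may be worth double-checking (this is only a guess at a possible cause, not something confirmed in the question): EC2 drops traffic an instance forwards on behalf of other hosts unless its source/destination check is disabled. A sketch using the AWS CLI, with i-xxxxxxxx as a placeholder for the NAT instance ID:

```shell
# Show the current sourceDestCheck attribute for the NAT instance
# (i-xxxxxxxx is a placeholder for the real instance ID).
aws ec2 describe-instance-attribute \
    --instance-id i-xxxxxxxx \
    --attribute sourceDestCheck

# Disable it so the instance is allowed to forward traffic for the
# private subnet.
aws ec2 modify-instance-attribute \
    --instance-id i-xxxxxxxx \
    --no-source-dest-check
```

The same attribute can also be toggled from the console via Instance Settings.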
Testing
- I AM root during all of this…
- I have tried setting sslverify=0 in /etc/yum.repos.d/redhat-rhui.repo and /etc/yum.repos.d/redhat-rhui-client-config.repo, and then executing yum clean all. It did not solve the issue.
- Both the servers in the private and public subnets can ping 8.8.8.8
- Both servers are able to resolve names to IPs, including the names of the yum repositories.
- Both the private and public server seem to be able to see and touch the following RPM:
$ rpm -Uvh ftp://ftp.pbone.net/mirror/ftp.sourceforge.net/pub/sourceforge/o/os/osolinux/update/RPMS.e/elinks-0.12-0.32.pre5mgc30.x86_64.rpm
Retrieving ftp://ftp.pbone.net/mirror/ftp.sourceforge.net/pub/sourceforge/o/os/osolinux/update/RPMS.e/elinks-0.12-0.32.pre5mgc30.x86_64.rpm
error: Failed dependencies:
	libgc.so.1()(64bit) is needed by elinks-0.12-0.32.pre5mgc30.x86_64
	libgpm.so.2()(64bit) is needed by elinks-0.12-0.32.pre5mgc30.x86_64
	libmozjs185.so.1.0()(64bit) is needed by elinks-0.12-0.32.pre5mgc30.x86_64
	libnss_compat_ossl.so.0()(64bit) is needed by elinks-0.12-0.32.pre5mgc30.x86_64
If I attempt to load a new repository on the private server I get a timeout…
$ rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
Retrieving http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
curl: (7) Failed connect to pkgs.repoforge.org:80; Connection timed out
error: skipping http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm - transfer failed
- Turning masquerading on or off seems to make no difference.
=== POSTING HERE FOR OTHERS' EDIFICATION ===
Hi Michael. Thanks for the comment.
I did actually use traceroute and saw that packets from the private server in question are reaching the NAT. I also saw packets leaving the NAT, which should have been the forwarded packets. And that's it. Nothing more.
I get the impression the requests are getting rejected by the repositories, since ping and fetching remote RPMs over the internet seem to work… but I don't know why. I get the same result with masquerading on and off.
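To narrow that down further, a packet capture on the NAT instance would show whether the repository's replies are coming back at all. This is a generic tcpdump sketch (the interface name and the assumption that RHUI traffic is HTTPS on port 443 are mine, not from the thread):

```shell
# On the NAT instance: watch traffic to/from one of the CDS load
# balancers named in the yum error output.
tcpdump -ni eth0 'host rhui2-cds01.us-east-1.aws.ce.redhat.com and port 443'

# Or watch everything being forwarded for the private subnet.
tcpdump -ni eth0 net 10.0.1.0/24
```

If outbound SYNs appear but no replies ever come back, the problem is on the return path (routing, source/dest check, or the remote end); if replies reach the NAT but never the private server, the forwarding/masquerading rules are the suspect.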
The NAT server was built automatically during the process of creating the VPC. The security groups were created using the 'Scenario 2' page…but are currently wide open.
Best Answer
In a very similar situation to the one described in the question, I was able to solve it by adding a proxy configuration to /etc/yum.conf, like this:
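A minimal sketch of the kind of proxy stanza /etc/yum.conf accepts (the hostname, port, and credentials below are placeholders, not the values from the original answer):

```ini
# In the [main] section of /etc/yum.conf.
# proxy.example.com:3128 is a placeholder; use your proxy's address.
proxy=http://proxy.example.com:3128
# Only needed if the proxy requires authentication:
# proxy_username=yum-user
# proxy_password=secret
```

After editing, run yum clean all so yum picks up the new configuration on the next transaction.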