buffer overflow detected
Noticed when running LargeClusterSpec with akka.test.large-cluster-spec.nodes-per-datacenter = 10 on 5 moxie servers.
Same thing with both java 1.6 and 1.7
[JVM-Node2] [INFO] [10/15/2012 09:41:16.604] [main] [Cluster(akka://second-datacenter-10)] Cluster Node [akka://second-datacenter-10@a4.local:57891] - has started up successfully
[JVM-Node1] *** buffer overflow detected ***: java terminated
[JVM-Node1] ======= Backtrace: =========
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7fda43709007]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0x107f00)[0x7fda43707f00]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0x108fbe)[0x7fda43708fbe]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x9226)[0x7fda35064226]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x971d)[0x7fda3506471d]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(_nss_ldap_gethostbyaddr_r+0x10f)[0x7fda3505efdf]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(gethostbyaddr_r+0x122)[0x7fda4370b072]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0xcaf3e)[0x7fda436caf3e]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(getaddrinfo+0xde)[0x7fda436cd18e]
[JVM-Node1] /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/libnet.so(Java_java_net_Inet6AddressImpl_lookupAllHostAddr+0xe0)[0x7fda3567e310]
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
java version "1.7.0_07"
OpenJDK Runtime Environment (IcedTea7 2.3.2) (7u7-2.3.2-1ubuntu0.12.04.1)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)
on 2012-10-15 16:02 *
By Patrik Nordwall
Component changed from None to cluster
Milestone set to Coltrane
on 2012-10-15 16:03 *
By Patrik Nordwall
Workaround is to use IP v4
-Djava.net.preferIPv4Stack=true
on 2012-10-15 22:23 *
By viktorklang
wow, nice...
on 2012-10-17 23:54 *
By Patrik Nordwall
Same thing happened in ClusterDeathWatchSpec on a0 with JDK 1.6 and IPv4
[JVM-Node1] *** buffer overflow detected ***: /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java terminated
[JVM-Node1] ======= Backtrace: =========
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f3db18a0907]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0x109800)[0x7f3db189f800]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0x10a8be)[0x7f3db18a08be]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x9226)[0x7f3d0a9e3226]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x971d)[0x7f3d0a9e371d]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(_nss_ldap_gethostbyname2_r+0x10a)[0x7f3d0a9ddcaa]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(_nss_ldap_gethostbyname_r+0x1e)[0x7f3d0a9ddebe]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(gethostbyname_r+0x135)[0x7f3db18a3485]
[JVM-Node1] /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/libnet.so(Java_java_net_Inet4AddressImpl_lookupAllHostAddr+0xca)[0x7f3d4069460a]
on 2012-10-18 00:55 *
By Patrik Nordwall
and 1.7
[JVM-Node2] A TestConductor
[JVM-Node1] - must enter a barrier (127 milliseconds)
[JVM-Node2] - must enter a barrier (16 milliseconds)
[JVM-Node1] *** buffer overflow detected ***: /usr/lib/jvm/java-7-openjdk-amd64/bin/java terminated
[JVM-Node1] ======= Backtrace: =========
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f0525614907]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0x109800)[0x7f0525613800]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(+0x10a8be)[0x7f05256148be]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x9226)[0x7efeadddd226]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x971d)[0x7efeadddd71d]
[JVM-Node1] /lib/x86_64-linux-gnu/libnss_ldap.so.2(_nss_ldap_gethostbyaddr_r+0x10f)[0x7efeaddd7fdf]
[JVM-Node1] /lib/x86_64-linux-gnu/libc.so.6(gethostbyaddr_r+0x122)[0x7f0525616972]
[JVM-Node1] /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/libnet.so(Java_java_net_Inet4AddressImpl_getHostByAddr+0xbb)[0x7f04e39dd49b]
on 2012-10-18 13:17 *
By Jonas Bonér
Damn. Any ideas?
on 2012-10-18 13:48 *
By bjorn.antonsson@typesafe.com
You should add -Xcheck:jni to ensure that the arguments coming from Java to native code are ok
Have you gotten a core dump? That can be forced by using this JVM flag -XX:OnError="gcore %p"
Analyzing that core is a different matter ;)
on 2012-10-18 14:29 *
By Patrik Nordwall
Alright, added those two flags, and was able to reproduce by only running
multi-jvm:test-only akka.cluster.ClusterDeathWatch
I don't get any core file though.
Good thing is that I can reproduce it running one single test. Will try with different jvms now.
on 2012-10-18 14:48 *
By bjorn.antonsson@typesafe.com
The poor jenkinsakka guy has ulimit -c => 0, no core files for you mister.
Should be configured to ulimit -c unlimited
on 2012-10-18 15:03 *
By Patrik Nordwall
same thing with sun jdk 1.6, I guess they all use the same libc.so
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
*** buffer overflow detected ***: /usr/lib/jvm/java-6-sun-1.6.0.26/jre/bin/java terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f35c9b52907]
/lib/x86_64-linux-gnu/libc.so.6(+0x109800)[0x7f35c9b51800]
/lib/x86_64-linux-gnu/libc.so.6(+0x10a8be)[0x7f35c9b528be]
/lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x9226)[0x7f2fa35ee226]
/lib/x86_64-linux-gnu/libnss_ldap.so.2(+0x971d)[0x7f2fa35ee71d]
/lib/x86_64-linux-gnu/libnss_ldap.so.2(_nss_ldap_gethostbyname2_r+0x10a)[0x7f2fa35e8caa]
/lib/x86_64-linux-gnu/libc.so.6(+0xcba26)[0x7f35c9b13a26]
/lib/x86_64-linux-gnu/libc.so.6(getaddrinfo+0xde)[0x7f35c9b16a8e]
/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/libnet.so(Java_java_net_Inet6AddressImpl_lookupAllHostAddr+0x160)[0x7f35312335c0]
on 2012-10-18 15:47 *
By Patrik Nordwall
Interesting, the problem goes away when I ignore the test in ClusterDeathWatchSpec that uses "unknownhost"
RootActorPath(Address("akka", system.name, "unknownhost", 2552)) / "user" / "subject"
hmm
on 2012-10-18 16:28 *
By Patrik Nordwall
yes, same thing with other strings
It dumps immediately after InetAddress.getByName in akka.remote.netty.client.Client. Never returns from that.
It works fine with ip numbers, both existing and non-existing.
It works fine when using hostname that is in /etc/hosts (such as own a0).
It looks like this is triggered when dns is involved, also for existing hostnames such as a2.
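That pattern fits the native lookup path: InetAddress.getByName bottoms out in getaddrinfo, which walks the NSS host chain from /etc/nsswitch.conf (files, dns, ldap, ...), so names found in /etc/hosts never reach the ldap module. A minimal C sketch of that same call (not a reproduction of the crash, just the entry point involved):

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>

/* getaddrinfo() is what libnet.so's lookupAllHostAddr calls; it walks
 * the NSS host chain, and on these machines the ldap module in that
 * chain is where the fortify abort happens. "localhost" resolves via
 * /etc/hosts and never touches ldap, matching the observations above. */
int main(void) {
    struct addrinfo hints;
    struct addrinfo *res = NULL;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;

    int rc = getaddrinfo("localhost", NULL, &hints, &res);
    if (rc == 0) {
        printf("resolved via NSS chain\n");
        freeaddrinfo(res);
    } else {
        printf("lookup failed: %s\n", gai_strerror(rc));
    }
    return 0;
}
```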
on 2012-10-18 17:18 *
By Patrik Nordwall
I'm out of ideas, so I leave this for now.
Couldn't reproduce with small program.
import java.net.InetAddress;

public class Test {
    public static void main(String... args) throws Exception {
        InetAddress a = InetAddress.getByName(args[0]);
        System.out.println("Got: " + a);
    }
}
on 2012-10-18 17:44 *
By bjorn.antonsson@typesafe.com
This is the stack trace of the failing thread from the core file:
[1] java.net.Inet6AddressImpl.lookupAllHostAddr (native method)
[2] java.net.InetAddress$1.lookupAllHostAddr (InetAddress.java:850)
[3] java.net.InetAddress.getAddressFromNameService (InetAddress.java:1,201)
[4] java.net.InetAddress.getAllByName0 (InetAddress.java:1,154)
[5] java.net.InetAddress.getAllByName (InetAddress.java:1,084)
[6] java.net.InetAddress.getAllByName (InetAddress.java:1,020)
[7] java.net.InetAddress.getByName (InetAddress.java:970)
[8] akka.remote.netty.ActiveRemoteClient$$anonfun$1.apply$mcV$sp (Client.scala:167)
[9] akka.util.Switch.transcend (LockUtil.scala:27)
[10] akka.util.Switch.switchOn (LockUtil.scala:48)
[11] akka.remote.netty.ActiveRemoteClient.connect (Client.scala:152)
[12] akka.remote.netty.NettyRemoteTransport.send (NettyRemoteSupport.scala:241)
[13] akka.remote.RemoteActorRef.sendSystemMessage (RemoteActorRefProvider.scala:232)
[14] akka.actor.dungeon.DeathWatch$$anonfun$watch$1.apply$mcV$sp (DeathWatch.scala:21)
[15] akka.actor.dungeon.DeathWatch$$anonfun$watch$1.apply (DeathWatch.scala:20)
[16] akka.actor.dungeon.DeathWatch$$anonfun$watch$1.apply (DeathWatch.scala:20)
[17] akka.actor.dungeon.DeathWatch$class.maintainAddressTerminatedSubscription (DeathWatch.scala:141)
[18] akka.actor.dungeon.DeathWatch$class.watch (DeathWatch.scala:20)
[19] akka.actor.ActorCell.watch (ActorCell.scala:289)
[20] akka.cluster.ClusterDeathWatchSpec$$anonfun$1$$anonfun$apply$mcV$sp$5$$anonfun$apply$mcV$sp$6$$anonfun$apply$mcV$sp$15$$anon$3.<init> (ClusterDeathWatchSpec.scala:107)
[21] akka.cluster.ClusterDeathWatchSpec$$anonfun$1$$anonfun$apply$mcV$sp$5$$anonfun$apply$mcV$sp$6$$anonfun$apply$mcV$sp$15.apply (ClusterDeathWatchSpec.scala:106)
[22] akka.cluster.ClusterDeathWatchSpec$$anonfun$1$$anonfun$apply$mcV$sp$5$$anonfun$apply$mcV$sp$6$$anonfun$apply$mcV$sp$15.apply (ClusterDeathWatchSpec.scala:106)
[23] akka.actor.ActorCell.newActor (ActorCell.scala:444)
[24] akka.actor.ActorCell.create (ActorCell.scala:462)
[25] akka.actor.ActorCell.systemInvoke (ActorCell.scala:334)
[26] akka.dispatch.Mailbox.processAllSystemMessages (Mailbox.scala:256)
[27] akka.dispatch.Mailbox.run (Mailbox.scala:211)
[28] akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec (AbstractDispatcher.scala:502)
[29] scala.concurrent.forkjoin.ForkJoinTask.doExec (ForkJoinTask.java:262)
[30] scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask (ForkJoinPool.java:975)
[31] scala.concurrent.forkjoin.ForkJoinPool.runWorker (ForkJoinPool.java:1,478)
[32] scala.concurrent.forkjoin.ForkJoinWorkerThread.run (ForkJoinWorkerThread.java:104)
on 2012-10-18 17:47 *
By Patrik Nordwall
good, Client.scala:167 exactly matches what I have found
on 2012-10-18 17:58 *
By bjorn.antonsson@typesafe.com
From the stack trace in native code it looks like it's the LDAP host lookup that's failing in some way, or getting back a huge answer that overflows the buffer, but that info was in the printout as well.
Maybe if we had debug symbols available for the native libraries. I'll talk to Toni.
#0 0x00007f074e868445 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f074e86bbab in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007f074e8a649e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007f074e93c907 in __fortify_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00007f074e93b800 in __chk_fail () from /lib/x86_64-linux-gnu/libc.so.6
#5 0x00007f074e93c8be in __fdelt_warn () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x00007f01283bc226 in ?? () from /lib/x86_64-linux-gnu/libnss_ldap.so.2
#7 0x00007f01283bc71d in ?? () from /lib/x86_64-linux-gnu/libnss_ldap.so.2
#8 0x00007f01283b6caa in _nss_ldap_gethostbyname2_r () from /lib/x86_64-linux-gnu/libnss_ldap.so.2
#9 0x00007f074e8fda26 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#10 0x00007f074e900a8e in getaddrinfo () from /lib/x86_64-linux-gnu/libc.so.6
#11 0x00007f06b5e3d5c0 in Java_java_net_Inet6AddressImpl_lookupAllHostAddr ()
from /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/libnet.so
on 2012-10-19 14:20 *
By bjorn.antonsson@typesafe.com
Assigned to set to bjorn.antonsson@typesafe.com
Status changed from New to Accepted
Backtrace with debug symbols. No symbols available for the culprit libnss_ldap.so though. Nothing really new here.
#0 0x00007f074e868445 in __GI_raise (sig=<optimized out>)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
64 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007f074e868445 in __GI_raise (sig=<optimized out>)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00007f074e86bbab in __GI_abort () at abort.c:91
#2 0x00007f074e8a649e in __libc_message (do_abort=2,
fmt=0x7f074e9adf3f "*** %s ***: %s terminated\n")
at ../sysdeps/unix/sysv/linux/libc_fatal.c:201
#3 0x00007f074e93c907 in __GI___fortify_fail (
msg=0x7f074e9aded6 "buffer overflow detected") at fortify_fail.c:32
#4 0x00007f074e93b800 in __GI___chk_fail () at chk_fail.c:29
#5 0x00007f074e93c8be in __fdelt_chk (d=<optimized out>) at fdelt_chk.c:26
#6 0x00007f01283bc226 in ?? () from /lib/x86_64-linux-gnu/libnss_ldap.so.2
#7 0x00007f01283bc71d in ?? () from /lib/x86_64-linux-gnu/libnss_ldap.so.2
#8 0x00007f01283b6caa in _nss_ldap_gethostbyname2_r ()
from /lib/x86_64-linux-gnu/libnss_ldap.so.2
#9 0x00007f074e8fda26 in gaih_inet (name=0x7f067cc1fbd0 "unknownhost",
service=<optimized out>, req=0x7f06b634e9f0, pai=<optimized out>,
naddrs=0x7f06b634e950) at ../sysdeps/posix/getaddrinfo.c:940
#10 0x00007f074e900a8e in __GI_getaddrinfo (name=0x7f067cc1fbd0 "unknownhost",
service=<optimized out>, hints=0x7f06b634e9f0, pai=0x7f06b634e9e8)
at ../sysdeps/posix/getaddrinfo.c:2423
#11 0x00007f06b5e3d5c0 in Java_java_net_Inet6AddressImpl_lookupAllHostAddr ()
from /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/libnet.so
on 2012-10-25 15:52 *
By bjorn.antonsson@typesafe.com
So I built a JDK with extra debug printouts in the libnet.so library, forced the JVM to use IPv4, and made sure the call to gethostbyname2_r only cared about IPv4 addresses, but it still crashed.
Turns out that the code in libnss_ldap still queries for all addresses and then tries to filter out the unwanted ones.
I'm running the application with ltrace to see what library calls are made, but it seems to have ground to a halt.
Will try to minimize the test further, to find out what's triggering the change in behaviour.
on 2012-10-25 19:32 *
By viktorklang
What is the backup plan here?
on 2012-10-26 17:17 *
By bjorn.antonsson@typesafe.com
So I've instrumented the libnss_ldap.so.2 library, and the problem is that on a0 the library gets an FD number that is higher than FD_SETSIZE, and then tries to do an FD_SET with it (which is completely forbidden, or at least documented as undefined behaviour).
This seems like a configuration problem on the aX machines.
Test run output on a0 where it crashes
(hosts.c) _nss_ldap_gethostbyname2_r:194 about to look up by name for unknownhost
(hosts.c) _nss_ldap_gethostbyname2_r:211 about to query
(nslcd-prot.c) nslcd_client_open:62 about to check
(nslcd-prot.c) nslcd_client_open:73 about to open socket Success
(nslcd-prot.c) nslcd_client_open:75 connect returned 0 Success : Success
(nslcd-prot.c) nslcd_client_open:88 about to create a stream
(tio.c) tio_fdopen:139 FD: 1185
(hosts.c) _nss_ldap_gethostbyname2_r:220 about to write Success
(hosts.c) _nss_ldap_gethostbyname2_r:228 about to flush Success
(tio.c) tio_flush:383 about to prepare
(tio.c) tio_flush:386 about to loop
(tio.c) tio_flush:389 about to select
(tio.c) tio_select:187 FD_ZERO
(tio.c) tio_select:189 FD_SET FD: 1185 FD_SETSIZE: 1024
Test run output on a2 where it works (still high number though)
(hosts.c) _nss_ldap_gethostbyname2_r:194 about to look up by name for unknownhost
(hosts.c) _nss_ldap_gethostbyname2_r:211 about to query
(nslcd-prot.c) nslcd_client_open:62 about to check
(nslcd-prot.c) nslcd_client_open:73 about to open socket Success
(nslcd-prot.c) nslcd_client_open:75 connect returned 0 Success : Success
(nslcd-prot.c) nslcd_client_open:88 about to create a stream
(tio.c) tio_fdopen:139 FD: 417
(hosts.c) _nss_ldap_gethostbyname2_r:220 about to write Success
(hosts.c) _nss_ldap_gethostbyname2_r:228 about to flush Success
(tio.c) tio_flush:383 about to prepare
(tio.c) tio_flush:386 about to loop
(tio.c) tio_flush:389 about to select
(tio.c) tio_select:187 FD_ZERO
(tio.c) tio_select:189 FD_SET FD: 417 FD_SETSIZE: 1024
(tio.c) tio_select:192 check deadline
(tio.c) tio_select:199 wait for activity
(tio.c) tio_select:213 select 2
(tio.c) tio_flush:395 about to write
(tio.c) tio_flush:402 done
(hosts.c) _nss_ldap_gethostbyname2_r:236 about to read
(tio.c) tio_select:187 FD_ZERO
(tio.c) tio_select:189 FD_SET FD: 417 FD_SETSIZE: 1024
(tio.c) tio_select:192 check deadline
(tio.c) tio_select:199 wait for activity
(tio.c) tio_select:205 select 1
(hosts.c) _nss_ldap_gethostbyname2_r:240 about to read
(hosts.c) _nss_ldap_gethostbyname2_r:247 checking response
(tio.c) tio_flush:383 about to prepare
(tio.c) tio_flush:386 about to loop
(tio.c) tio_flush:402 done
on 2012-10-26 17:25 *
By bjorn.antonsson@typesafe.com
Just looked at the minimal test that always works and only has the JDK as a dependency.
It has an FD of 4. Are we leaking file descriptors?
(hosts.c) _nss_ldap_gethostbyname2_r:194 about to look up by name for unknownhost
(hosts.c) _nss_ldap_gethostbyname2_r:211 about to query
(nslcd-prot.c) nslcd_client_open:62 about to check
(nslcd-prot.c) nslcd_client_open:73 about to open socket Success
(nslcd-prot.c) nslcd_client_open:75 connect returned 0 Success : Success
(nslcd-prot.c) nslcd_client_open:88 about to create a stream
(tio.c) tio_fdopen:139 FD: 4
(hosts.c) _nss_ldap_gethostbyname2_r:220 about to write Success
(hosts.c) _nss_ldap_gethostbyname2_r:228 about to flush Success
(tio.c) tio_flush:383 about to prepare
(tio.c) tio_flush:386 about to loop
(tio.c) tio_flush:389 about to select
(tio.c) tio_select:187 FD_ZERO
(tio.c) tio_select:189 FD_SET FD: 4 FD_SETSIZE: 1024
(tio.c) tio_select:192 check deadline
(tio.c) tio_select:199 wait for activity
(tio.c) tio_select:213 select 2
(tio.c) tio_flush:395 about to write
(tio.c) tio_flush:402 done
(hosts.c) _nss_ldap_gethostbyname2_r:236 about to read
(tio.c) tio_select:187 FD_ZERO
(tio.c) tio_select:189 FD_SET FD: 4 FD_SETSIZE: 1024
(tio.c) tio_select:192 check deadline
(tio.c) tio_select:199 wait for activity
(tio.c) tio_select:205 select 1
(hosts.c) _nss_ldap_gethostbyname2_r:240 about to read
(hosts.c) _nss_ldap_gethostbyname2_r:247 checking response
(tio.c) tio_flush:383 about to prepare
(tio.c) tio_flush:386 about to loop
(tio.c) tio_flush:402 done
We can't and won't fix the broken nss_ldap library, so I will close this ticket.
The saga continues in #2659