HDFS exceeds the limit of concurrent xcievers

Environment: Hadoop CDH 5.4. A jstack of the DataNode shows many threads stuck like this:

"DataXceiver for client unix:/var/run/hdfs-sockets/dn [Waiting for operation #1]" daemon prio=10 tid=0x00007ffc42de9000 nid=0x68f8 waiting on condition [0x00007ffacbd1d000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x00000007a5d3d568> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
	at org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:316)
	at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:418)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:241)
	at java.lang.Thread.run(Thread.java:745)
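
These threads are parked in DomainSocketWatcher.add while setting up a short-circuit read shared-memory segment (requestShortCircuitShm): the DataXceiver hands its domain socket to the single DomainSocketWatcher thread and then waits on a condition until that thread has registered the socket. If the watcher thread is deadlocked or has terminated (see the issues below), every such DataXceiver stays parked indefinitely, the xceiver count keeps climbing, and the DataNode eventually rejects new connections with "exceeds the limit of concurrent xcievers". The following is only a minimal sketch of that hand-off pattern under these assumptions; it is not the real Hadoop code and every name in it is illustrative.

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // Simplified sketch (NOT Hadoop source) of the hand-off used by
    // DomainSocketWatcher.add: callers queue a request and wait until the
    // single watcher thread signals that it has been processed.
    public class WatcherHandOffSketch {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition processed = lock.newCondition();
        private final Queue<String> toAdd = new ArrayDeque<>();

        // Called by DataXceiver threads.
        public void add(String socket) throws InterruptedException {
            lock.lock();
            try {
                toAdd.add(socket);
                // Block until the watcher thread has taken our entry.
                // This await() is where the dumped threads are parked; if the
                // watcher thread is gone, nothing ever signals and we wait forever.
                while (toAdd.contains(socket)) {
                    processed.await();
                }
            } finally {
                lock.unlock();
            }
        }

        // Run by the single watcher thread; if this loop dies, add() never returns.
        public void watcherLoop() throws InterruptedException {
            while (true) {
                lock.lock();
                try {
                    String s;
                    while ((s = toAdd.poll()) != null) {
                        // register s with the poll/epoll set (omitted in this sketch)
                    }
                    processed.signalAll();
                } finally {
                    lock.unlock();
                }
                Thread.sleep(10); // stand-in for blocking on poll()
            }
        }
    }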


Related issues

[HADOOP-11333] – Fix deadlock in DomainSocketWatcher when the notification pipe is full
[HADOOP-10404] – Some accesses to DomainSocketWatcher#closed are not protected by lock
https://issues.apache.org/jira/browse/HADOOP-11604
https://issues.apache.org/jira/browse/HADOOP-11802
https://issues.apache.org/jira/browse/HDFS-8429  
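
A quick way to confirm that this is what is driving the xceiver count is to count, in a saved jstack dump, how many DataXceiver threads are parked in DomainSocketWatcher.add. The snippet below is only an illustration: the class name and the matched substrings are my own choices, taken from the thread dump above.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class StuckXceiverCount {
        public static void main(String[] args) throws Exception {
            // args[0]: path to a file produced by `jstack <datanode-pid>`
            String dump = new String(Files.readAllBytes(Paths.get(args[0])));
            // jstack separates individual thread stacks with blank lines
            long stuck = Stream.of(dump.split("\n\n"))
                    .filter(t -> t.contains("DataXceiver for client"))
                    .filter(t -> t.contains("DomainSocketWatcher.add"))
                    .count();
            System.out.println("DataXceiver threads parked in DomainSocketWatcher.add: " + stuck);
        }
    }

If that number is close to the configured transfer-thread limit (dfs.datanode.max.transfer.threads), restarting the DataNode clears the stuck threads, but the lasting fix is to run a release that contains the patches listed above.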

updated 2024-08-30