EventBus: ActorClassification.dissociate is O(n^2)
From the mailing list:
From: Steve Sowerby <steve.sowerby@gmail.com>
Subject: [akka-user] LocalDeathWatch seemingly eating CPU
Date: March 12, 2012 20:03:54 GMT+01:00
Hi,
I'm currently reworking some ETL software to use Akka for parts of the problem which have the biggest concurrency headaches.
So far it's going pretty well but during performance analysis I saw this slightly worrying CPU profile come out of JProfiler.
Please see attached.
This is Akka 2.0 with Oracle JDK 1.7.04 EA running on a four core box with several thousand concurrent Ask futures floating around.
So it seems that as the temporary Ask actors complete, the overhead in LocalDeathWatch is pretty high.
Naively it seems there might be some kind of O(N^2) problem with the Vector being used.
As you can see from the highlighted row in the profile, we go from 24K calls to dissociateMonitor to 12M calls to Vector.contains.
The Iterator.next line is probably somewhat bogus as the UI reported that the instrumentation overwhelmed the actual code for that method.
But the Vector.contains looks like it needs investigation.
I don't know the internals of Akka well at all yet, especially this part, so I thought I'd ask for the opinion of those that do before delving too deeply.
So if anyone could give me a clue what might be going on here and whether it may be a "real" problem or just some weird artifact of the profiling, I'd be very grateful.
Thanks in advance,
Steve Sowerby
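The call counts in the report are consistent with a linear scan per removal: with thousands of entries in a Vector, every dissociation that does a `contains`/`filterNot` pass touches the whole collection, so n removals cost O(n^2) comparisons in total. A minimal sketch of that effect (illustrative code, not Akka's actual internals), contrasting vector-based removal with a hash-based Set:

```scala
object DissociateCost {
  // Remove n entries one by one from a Vector, worst case (element near
  // the end): `contains` scans the whole vector before finding it, so
  // total comparisons are n + (n-1) + ... + 1 = n(n+1)/2, i.e. O(n^2).
  def vectorRemovals(n: Int): Long = {
    var v = Vector.range(0, n)
    var comparisons = 0L
    for (x <- (n - 1) to 0 by -1) {
      comparisons += v.length          // cost of the linear contains scan
      if (v.contains(x)) v = v.filterNot(_ == x)
    }
    comparisons
  }

  // The same workload against an immutable Set: hash-based membership
  // tests and removals are effectively O(1), so total work stays linear.
  def setRemovals(n: Int): Long = {
    var s = (0 until n).toSet
    var ops = 0L
    for (x <- (n - 1) to 0 by -1) {
      ops += 1
      if (s.contains(x)) s -= x
    }
    ops
  }

  def main(args: Array[String]): Unit = {
    println(s"vector comparisons for n=100: ${vectorRemovals(100)}") // 5050
    println(s"set operations for n=100:     ${setRemovals(100)}")    // 100
  }
}
```

At the scale in the report (several thousand concurrent ask actors) the quadratic term easily explains going from tens of thousands of dissociate calls to millions of Vector.contains calls.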
on 2012-03-13 11:06
By viktorklang
Most viable solution is to not test for AskActorRefs as monitors as that does not make any sense.
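A sketch of that idea (hypothetical names and structure, not Akka's actual code): temporary ask-pattern actors never watch anyone, so when one terminates, the death-watch bookkeeping can return early instead of scanning every monitor collection for it.

```scala
sealed trait Ref { def name: String }
final case class RegularRef(name: String) extends Ref
final case class AskRef(name: String) extends Ref // temporary ask-pattern actor

object DeathWatchSketch {
  // watched actor -> the monitors currently watching it
  private var monitors: Map[Ref, Vector[Ref]] = Map.empty

  def watch(monitor: Ref, watched: Ref): Unit =
    monitors = monitors.updated(
      watched, monitors.getOrElse(watched, Vector.empty) :+ monitor)

  def monitorsOf(watched: Ref): Vector[Ref] =
    monitors.getOrElse(watched, Vector.empty)

  // When an actor terminates it must be removed wherever it appears as a
  // monitor, which means a linear pass over every vector. The early return
  // skips all of that work for ask actors, which never watch anyone.
  def dissociateAsMonitor(terminated: Ref): Unit = terminated match {
    case _: AskRef => () // never registered as a monitor: nothing to scan
    case _ =>
      monitors = monitors.map { case (watched, ms) =>
        watched -> ms.filterNot(_ == terminated)
      }
  }
}
```

With thousands of short-lived ask actors terminating, the early return removes the dominant source of Vector.contains/filterNot traffic without changing behavior for real watchers.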
on 2012-03-15 14:18
By viktorklang
Updating tickets (#620, #679, #725, #750, #752, #753, #754, #763, #789, #870, #893, #922, #953, #954, #971, #977, #983, #985, #987, #991, #1026, #1045, #1051, #1060, #1061, #1084, #1098, #1099, #1133, #1134, #1135, #1136, #1137, #1194, #1225, #1226, #1243, #1245, #1247, #1248, #1254, #1261, #1300, #1317, #1391, #1412, #1791, #1793, #1901, #1908, #1911, #1912, #1913, #1914, #1915, #1916, #1917, #1922, #1983, #1987, #1996, #1997, #1998, #2066, #2077, #2105, #2117, #2133, #2143, #2149, #2151, #2152, #2153, #2155, #2157, #2158, #2159, #2160, #2161, #2162, #2163, #2164, #2165, #2167, #2171, #2175, #2176, #2177, #2180, #2182, #2184, #2185, #2193, #2199, #2202, #2204, #2206, #2207, #2209, #2210)