Reusing connections with session-enabled web applications


Reusing connections with session-enabled web applications

Barcode S K
 Hello,

I recently recorded my application with JMeter.

   - The application is SSL-enabled.
   - Application launch, login, and a screen launch are inside a Once-Only
   Controller.
   - The actual *transaction*, which involves just one POST request, is in
   a separate Transaction Controller.
   - Sleep time of 1 second is configured.


I have been running 1-VU tests, and I have observed that (for the
*transaction* alone):

   1. While running with think time, average response time is about 42 ms.
   2. While running without think time, average response time is 31 ms.

To eliminate network latency and excess load, I am running only 1-VU tests
for 30-60 seconds on the same machine that hosts the application. I have
observed (via netstat) that JMeter opens a new connection every 2-3 seconds,
and I believe the overhead of repeated TCP and SSL handshakes (and other
connection-related processing) is the reason for this difference.

I have not seen this behavior with other tools like NeoLoad and OATS. In
those tools, when an application that has sessions is run in a load test,
there is only one socket (connection) per virtual client and it remains
open throughout the run.

Is there any way to ensure that JMeter does not open new sockets for new
requests in the middle of the run like this? Keep-alive is enabled for HTTP
requests.
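The reuse behavior in question can be checked in isolation with a plain-JDK sketch (JDK 11+; this is not JMeter code, and the hand-rolled keep-alive server below is an illustration, not the application under test): a client that honors keep-alive should deliver several sequential requests over a single accepted TCP connection.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicInteger;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger accepted = new AtomicInteger();
        ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket s = server.accept();       // one accept per TCP connection
                    accepted.incrementAndGet();
                    Thread h = new Thread(() -> handle(s));
                    h.setDaemon(true);
                    h.start();
                }
            } catch (IOException ignored) { }         // server socket closed below
        });
        acceptor.setDaemon(true);
        acceptor.start();

        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1).build();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:" + server.getLocalPort() + "/")).build();
        for (int i = 0; i < 3; i++) {
            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println("request " + i + " -> " + resp.statusCode());
        }
        System.out.println("connections accepted: " + accepted.get());
        server.close();
    }

    // Minimal HTTP/1.1 keep-alive responder: answers each request on the
    // socket and leaves the connection open for the next one.
    static void handle(Socket s) {
        try (Socket sock = s) {
            BufferedReader in = new BufferedReader(new InputStreamReader(sock.getInputStream()));
            OutputStream out = sock.getOutputStream();
            String line;
            boolean inRequest = false;
            while ((line = in.readLine()) != null) {
                if (!line.isEmpty()) {
                    inRequest = true;
                } else if (inRequest) {               // blank line ends the headers
                    out.write(("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                            + "Connection: keep-alive\r\n\r\nok").getBytes());
                    out.flush();
                    inRequest = false;
                }
            }
        } catch (IOException ignored) { }
    }
}
```

With connection reuse working, the accepted-connection count stays at 1 for all three requests; a client that reconnects per request would drive it to 3.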

-SK

Re: Reusing connections with session-enabled web applications

Felix Schumacher

On 18.08.20 at 18:18, Barcode S K wrote:
>  Hello,
>
> I recently recorded my application with JMeter.

Which version of JMeter are you using?

> [...]

We changed the default for httpclient4.time_to_live to 60000 (milliseconds).
If you are not using the newest version (which is 5.3; you can also try a
nightly build), make sure that you have not overridden that setting locally.

The change was tracked with
https://bz.apache.org/bugzilla/show_bug.cgi?id=64289.

Hope this helps

 Felix
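For reference, the setting Felix mentions can be pinned in JMeter's user.properties; a sketch (the property name and the 60000 ms value are from the message above, the comments are editorial):

```properties
# Time-to-live, in milliseconds, for pooled HTTP connections.
# 60000 is the default as of JMeter 5.3 (see bug 64289 above);
# a lower local override reproduces the frequent reconnects.
httpclient4.time_to_live=60000
```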


Re: Reusing connections with session-enabled web applications

Barcode S K
Thank you, Felix. This should help.

I tried running the same tests with JMeter 5.3. It is true: the connections
were not reopened. That's a great change!
I believe there is still a problem, though. I earlier had response times of
31 ms without think time and 42 ms with it. Now it's 31 and 37, and that is
consistent. An improvement, but the question remains: why the extra 6 ms?
The impact becomes more pronounced when there is high latency; in the tests
referenced here, I had only 40-45 microseconds of latency to contend with
(everything is on the same host).


I have had some problems with JMeter 5.3, because of which I went back to
5.2. I will start a different thread for that; for now, here is what I see
every time a load test ends with 5.3:

Tidying up ...    @ Wed Aug 19 12:49:45 IST 2020 (1597821585946)
... end of run
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[DestroyJavaVM,5,main], stackTrace:
Thread[AWT-EventQueue-0,6,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#park at line:175
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await
at line:2039
java.awt.EventQueue#getNextEvent at line:554
java.awt.EventDispatchThread#pumpOneEventForFilters at line:187
java.awt.EventDispatchThread#pumpEventsForFilter at line:116
java.awt.EventDispatchThread#pumpEventsForHierarchy at line:105
java.awt.EventDispatchThread#pumpEvents at line:101
java.awt.EventDispatchThread#pumpEvents at line:93
java.awt.EventDispatchThread#run at line:82

Thread[AWT-Shutdown,5,system], stackTrace:java.lang.Object#wait
sun.awt.AWTAutoShutdown#run at line:314
java.lang.Thread#run at line:748

^C

This error has appeared on all (three) Linux VMs (Oracle Linux) that I've
tried, plus on my Windows system that had a previous installation of
JMeter. I did not encounter this error while running it on another Windows
server that had never had JMeter installed on it.





Re: Reusing connections with session-enabled web applications

Felix Schumacher

On 19.08.20 at 09:33, Barcode S K wrote:

> [...]
A minimal test plan to reproduce such issues is always good to have.

> [...]

We had some problems with the new default look and feel (Darklaf), which
might lead to unexpected problems in GUI mode (this affects 5.3). You can
try a nightly build (where the bugs should be fixed) or switch to a
non-Darklaf look and feel.

Felix


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]


Re: Reusing connections with session-enabled web applications

Barcode S K
I was using the "system" look and feel and still had the problem. Maybe
one of the new builds will help.

Could you help me figure out why there is this additional overhead when I
run with think time? The TTL is now 60 seconds and my runs are shorter than
60 seconds. The overhead has dropped by 5 ms (it is currently 6 ms), but it
is still there. What could be the reason? I run exactly the same thing with
exactly the same number of VUs on the very same system, just with and
without think time.
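The with/without-think comparison described above can be modeled outside JMeter with a plain-JDK harness (a sketch under assumptions: JDK 11+, a trivial local endpoint standing in for the application; this is not JMeter's HttpClient stack). The same client issues sequential requests with and without a 1-second pause and prints the average per-request latency of each run.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThinkTimeHarness {
    public static void main(String[] args) throws Exception {
        // Trivial local endpoint: responds "ok" with no server-side work.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            byte[] body = "ok".getBytes();
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
        URI uri = URI.create("http://127.0.0.1:" + server.getAddress().getPort() + "/");

        System.out.printf("no think: %.2f ms avg%n", run(uri, 0));
        System.out.printf("1s think: %.2f ms avg%n", run(uri, 1000));
        server.stop(0);
    }

    // Sends sequential requests, sleeping thinkMs between them, and
    // returns the average request latency in milliseconds.
    static double run(URI uri, long thinkMs) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1).build();
        HttpRequest req = HttpRequest.newBuilder(uri).build();
        int n = 5;
        long totalNanos = 0;
        for (int i = 0; i < n; i++) {
            if (i > 0 && thinkMs > 0) Thread.sleep(thinkMs);
            long t0 = System.nanoTime();
            HttpResponse<Void> resp = client.send(req, HttpResponse.BodyHandlers.discarding());
            totalNanos += System.nanoTime() - t0;
            if (resp.statusCode() != 200) throw new IllegalStateException("bad status");
        }
        return totalNanos / (double) n / 1_000_000;
    }
}
```

If the two averages stay close here but diverge in JMeter, the overhead is in the tool's request path rather than in the OS or network stack; if they diverge here too, the idle socket itself is implicated.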

-SK


Re: Reusing connections with session-enabled web applications

Felix Schumacher

On 19.08.20 at 12:34, Barcode S K wrote:
> [...]

There can be a lot of reasons (meaning I don't know :)), but if you have
a minimal test plan that shows that behaviour, we could look into it.

Felix




Re: Reusing connections with session-enabled web applications

Barcode S K
Felix,

I have created something that should help reproduce the behavior.
1. Deploy SampleWebApp.war on a server of your choice (I have tested with Tomcat 8.5.45 and WebLogic 12c). The context root for the application is "/SampleWebApp"; credentials: jmeter/world.
2. Configure SSL if possible.
3. Use the attached script to reproduce the issue. Disable the think-time component to run without sleep, and enable it to run with think time. Specify your server host, port, and scheme in "User Defined Variables".

Without SSL, the overhead was around 3 ms, and it was pretty consistent. All runs were single-user runs.

In the "process" servlet in SampleWebApp.war, I have coded a sleep time of 50 ms. With no sleep in the code, and because the application isn't really doing anything, the response time was 720 microseconds when think time was disabled and about 214 ms when think time was enabled. The source code is available in the WAR if you want to play with it.
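As a rough stand-in for the "process" endpoint described above (an assumption, not the actual SampleWebApp source), the same shape can be sketched with the JDK's built-in HttpServer instead of a servlet container: a handler that sleeps 50 ms before responding, plus a self-request that confirms the 50 ms floor on response time.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProcessEndpointSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Hypothetical equivalent of the "process" servlet: ~50 ms of "work".
        server.createContext("/SampleWebApp/process", exchange -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "done".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        URI uri = URI.create("http://127.0.0.1:" + server.getAddress().getPort()
                + "/SampleWebApp/process");
        HttpClient client = HttpClient.newHttpClient();
        long t0 = System.nanoTime();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(uri).build(),
                HttpResponse.BodyHandlers.ofString());
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("status=" + resp.statusCode()
                + " elapsedAtLeast50ms=" + (ms >= 50));
        server.stop(0);
    }
}
```

Any client-side overhead then shows up as time on top of the fixed 50 ms, which is what makes the with/without-think difference easy to isolate.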

-SK


