Latency x Requests Received


Latency x Requests Received

Marcio Prado
Good afternoon,

I have a question that may be simple for most of you.

I'm running some tests on a cloud computing environment with OpenStack.

When network latency increases, the number of HTTP requests served
decreases. Is this normal behavior?

I figured that the number of HTTP requests would remain the same as when
latency was low, since HTTP requests are generated regardless of
response time.

Can anyone explain this behavior?

Thank you!

--
Marcio Prado
IT Analyst - Infrastructure and Networks
Phone: (35) 9.9821-3561
www.marcioprado.eti.br

Re: Latency x Requests Received

sbos-61
Hi,

Yes, it is normal, and this seems confusing at first for most people.
It depends very much on how you design the test, the number of virtual users (VUs), etc.
If you do not use timers, then each VU issues a request as soon as the previous one is finished.
So, when response times increase, the throughput drops.
For example, if a request takes 1 second, one VU will issue 60 requests per minute.
When a request takes 2 seconds, one VU will issue 30 requests per minute. So you have to increase the number of VUs to compensate.
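
To make the arithmetic concrete, here is a minimal sketch in plain Java
(not JMeter code; the class name and the numbers are just taken from the
example above). Per VU, throughput = 60 / response time, and the VUs
needed for a target rate = target per minute * response time / 60.

// Closed-model arithmetic sketch: each VU sends its next request only
// after the previous response arrives.
public class ClosedModelThroughput {
    public static void main(String[] args) {
        int virtualUsers = 1;
        for (double responseTimeSec : new double[] {1.0, 2.0}) {
            double requestsPerMinute = virtualUsers * 60.0 / responseTimeSec;
            System.out.printf("response time %.1f s -> %.0f requests/min with %d VU%n",
                    responseTimeSec, requestsPerMinute, virtualUsers);
        }
        // VUs needed to hold a target rate, e.g. 60 req/min with 2 s responses:
        System.out.printf("VUs needed: %.0f%n", 60.0 * 2.0 / 60.0);  // = 2
    }
}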

In order to overcome this problem, the best option IMHO is to use a "Constant Throughput Timer".
This makes each VU wait so that it does not exceed a specified throughput.
So in the example above, if you insert a timer limiting the throughput to 30 requests per minute, you will not exceed this throughput.
Even if the actual request takes 0.5, 1, or 1.8 seconds, the throughput will not change.
You can adjust the number of VUs in order to obtain the desired load.
Obviously, if the request takes more than 2 seconds, the throughput will drop.
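
If it helps to picture what such a timer does, here is a rough pacing
sketch in plain Java (an illustration of the idea only, not JMeter's
actual implementation; the class name and the 1 s simulated request are
made up). Each iteration waits for its 2 s slot, so the send rate stays
at 30/min as long as the response time stays under the slot length.

import java.util.concurrent.TimeUnit;

public class PacedRequests {
    public static void main(String[] args) throws InterruptedException {
        double targetPerMinute = 30.0;
        long slotMs = (long) (60_000 / targetPerMinute);     // 2000 ms between sends
        long nextSend = System.currentTimeMillis();
        for (int i = 0; i < 3; i++) {
            long wait = nextSend - System.currentTimeMillis();
            if (wait > 0) TimeUnit.MILLISECONDS.sleep(wait); // the "timer" delay
            System.out.printf("request %d sent at %tT%n", i, System.currentTimeMillis());
            simulateRequest(1_000);                          // pretend the call takes 1 s
            nextSend += slotMs;                              // schedule the next slot
        }
    }

    // Stand-in for the real HTTP call; 0.5-1.8 s would still fit in the 2 s slot.
    static void simulateRequest(long millis) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(millis);
    }
}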

HTH
Sergio


--

Ing. Sergio Boso




Re: Latency x Requests Received

Marcio Prado
Sergio, do you have a link to the JMeter documentation that describes what
you explained to me?

In addition to your email, I need the link as a reference for my work.

Once again, thank you.


--
Marcio Prado
IT Analyst - Infrastructure and Networks
Phone: (35) 9.9821-3561
www.marcioprado.eti.br

Re: Latency x Requests Received

ra0077
Hi,

You can check
http://jmeter.apache.org/usermanual/component_reference.html#Constant_Throughput_Timer
and
http://jmeter.apache.org/usermanual/component_reference.html#Precise_Throughput_Timer

Also check the open/closed model for load testing.
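
For intuition, here is a rough contrast of the two models in plain Java
(not JMeter; the class name, the 1 s response time and the 500 ms arrival
interval are made-up numbers). In the closed model the next request waits
for the previous response, so higher latency means fewer requests; in the
open model requests keep arriving on their own schedule regardless of
latency.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OpenVsClosed {
    public static void main(String[] args) throws Exception {
        // Closed model: one simulated user loops, sending the next request
        // only after the previous one finishes, so its rate depends on latency.
        for (int i = 0; i < 3; i++) {
            doRequest("closed", i);
        }

        // Open model: requests are fired every 500 ms whether or not earlier
        // ones have completed, so the arrival rate is independent of latency.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(4);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            pool.schedule(() -> doRequest("open", id), id * 500L, TimeUnit.MILLISECONDS);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    // Stand-in for an HTTP call that takes about 1 s.
    static void doRequest(String model, int id) {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.printf("[%s] request %d finished%n", model, id);
    }
}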

There is also an explanation in the book
https://leanpub.com/master-jmeter-from-load-test-to-devops
(full disclosure: I am one of the authors, and I don't know of other books
that explain it).


Re: Latency x Requests Received

Marcio Prado
Hi,

Yes, thank you very much for your attention.

I'll read about it.


--
Marcio Prado
IT Analyst - Infrastructure and Networks
Phone: (35) 9.9821-3561
www.marcioprado.eti.br
