Thread: Faster Way to Retrieve Records?
Fri, May 8 2015 9:46 AM

Raul

Team Elevate

On 5/8/2015 2:29 AM, Roy Lambert wrote:
>> So decrypting the data took 32x longer than actually retrieving and sending the data over the network to the client??? From my experience encryption usually adds 20% to the overall query time. Is there something odd with Stored Procedures that increases the decryption time so much?
>
> Total guess here, but my take would be it's a mixture of transport and decryption times. Still seems high though.

This is not entirely applicable to you (f/s vs c/s), but we ran into
something similar with DBISAM a while ago.

The DBISAM/EDB crypto is block-based, so the slowest scenario tends to be
one with a lot of data in a single request, where the connection and query
prepare/open happen once and then it's just one big data pump.

In our case, a few things helped:
- use a larger RemoteReadSize
- remove fields you don't need from the select
- enable compression; it helped a lot (it happens before the crypto, so the
actual data that needs encryption is smaller)
- Tim implemented some optimizations for the Blowfish routines (mostly
inlining and such; I believe those are all part of shipping DBISAM now)

In our case we ended up with an approximately 20% cost for enabling encryption.

I know the above does not entirely apply to you, but maybe parts of it
provide some ideas to try.
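Not from the thread, but a minimal Python sketch of the "compression happens before crypto" point above (the payload, compressor, and block-counting helper are all made up for illustration; EDB's actual pipeline is internal):

```python
import zlib

BLOCK_SIZE = 8  # DBISAM/EDB Blowfish works on 8-byte blocks

def cipher_blocks(n_bytes: int) -> int:
    """Number of 8-byte cipher invocations a payload needs,
    rounding up to a whole block (padding)."""
    return -(-n_bytes // BLOCK_SIZE)  # ceiling division

# A repetitive payload standing in for a typical result-set buffer.
payload = b"customer record with lots of repeated text " * 200

compressed = zlib.compress(payload)

# Compressing first means far fewer blocks ever reach the cipher.
print(len(payload), cipher_blocks(len(payload)))
print(len(compressed), cipher_blocks(len(compressed)))
```

On compressible data the encrypted byte count (and thus the cipher work) drops sharply, which is why enabling compression helped so much in our case.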

Raul
Fri, May 8 2015 5:07 PM

Barry

Raul,

I'm wondering if the Stored Procedure is re-initializing the Blowfish algorithm for each row that is sent to the client? If Steve has the EDB server source code and a profiler like aQTime or any of the free ones (SamplingProfiler, gpProfile, asmProfiler), he can monitor the server code to find the bottleneck. He can try the same on his client code too.

Barry
Sat, May 9 2015 2:22 AM

Steve Gill


<< Total guess here, but my take would be it's a mixture of transport and decryption times. Still seems high though. >>

I agree; the difference is very surprising. I didn't think the performance hit would be that high.

= Steve
Sat, May 9 2015 2:22 AM

Steve Gill


<< I'm wondering if the Stored Procedure is re-initializing the Blowfish algorithm for each row that is sent to the client? If Steve has the EDB server source code and a profiler like aQTime or any of the free ones (SamplingProfiler, gpProfile, asmProfiler), he can monitor the server code to find the bottleneck. He can try the same on his client code too. >>

Tim ran it through a profiler, which is how he found the source of the problem. I think he would be the best one to answer why it happens.

= Steve
Sat, May 9 2015 3:06 AM

Roy Lambert

NLH Associates

Team Elevate

Steve

This is interesting. I just realised that the email table that's exhibiting the slowdown is encrypted. I'll take the encryption off next week and see what happens.

Roy Lambert
Sat, May 9 2015 11:24 AM

Raul

Team Elevate

On 5/9/2015 2:22 AM, Steve Gill wrote:
> << Total guess here, but my take would be it's a mixture of transport and decryption times. Still seems high though. >>
>
> I agree, the difference is very surprising. I didn't think the performance hit would be that high.

The encryption/decryption is currently based on an 8-byte block cipher, so
depending on the size of the data there might be a fair bit of encryption
activity.

Mostly theoretical, but say you request a recordset with a non-trivial
record size. Let's say each row is approximately 1 KB: that means 128 trips
to the encryption routines, and then you still need to repeat that for
every row in the recordset. Blowfish is very fast, but it's not free, and
this can add up quickly compared to the no-encryption scenario. Doing
things like compression and minimizing the data really helps, as might some
server-side caching, so it's really useful to do some test timings with
your queries before you turn encryption on.
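The arithmetic above can be sketched in a couple of lines (a back-of-the-envelope estimate, not EDB's actual code path; the row and recordset sizes are made-up assumptions):

```python
BLOCK_SIZE = 8      # bytes per Blowfish block
ROW_SIZE = 1024     # ~1 KB per row, as in the example above
ROWS = 50_000       # hypothetical recordset size

# Each row needs ceil(ROW_SIZE / BLOCK_SIZE) trips through the cipher.
trips_per_row = -(-ROW_SIZE // BLOCK_SIZE)
total_trips = trips_per_row * ROWS

print(trips_per_row)   # trips per 1 KB row
print(total_trips)     # cipher invocations for the whole recordset
```

Even at nanoseconds per block, millions of cipher invocations per query is where the "it's not free" cost comes from.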

The other thing is that there are two levels here: you can encrypt the
table and also the communication. If you do both, then you might end up
going through multiple cycles of this (for c/s at least): decrypt from
disk, encrypt for transport, and finally decrypt on the receiving end.

In our case we are c/s, so we usually do comms encryption only, but this
depends on your requirements, how critical the data is, and how secure you
consider the server to be.

Raul
Sat, May 9 2015 1:38 PM

Barry

Raul,

>The encryption/decryption is currently based on an 8-byte block cipher, so
depending on the size of the data there might be a fair bit of encryption
activity.<

I was under the impression that Steve only had this problem when using Stored Procedures.
What happens if Steve executes a normal SQL query from his client application that returns a similar number of rows and columns from the same (or similar) table that the SP accessed?

He could, for example, build a test table from the results of what the stored procedure returns, making sure the table is encrypted on disk. Then restart the EDB server to flush any cached data and see if retrieving the data from the test table is any slower than the stored procedure.

I'm betting the query using the test table will be retrieved much faster than the data returned from the Stored Procedure.

Barry
Mon, May 11 2015 11:49 PM

Steve Gill


Hi Barry,

<< I was under the impression that Steve only had this problem when using Stored Procedures.
What happens if Steve executes a normal SQL query from his client application that returns a similar number of rows and columns from the same (or similar) table that the SP accessed?  >>

You are correct. I tried a query using the same SQL (minus all of the stored proc stuff, of course) on the same table and it was much faster.

= Steve
Fri, May 15 2015 11:13 AM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Steve,

<< You are correct. I tried a query using the same SQL (minus all of the
stored proc stuff, of course) on the same table and it was much faster. >>

I went back and tried to replicate this, and you are correct, but *only* if
you turn *off* the RequestSensitive property for the query.

Why is this?  Because in such a case you're not reading the rows from the
source table during navigation; you're reading them from the result set of
the query, of which *most* of the rows remain buffered in memory and, as
such, don't need to be written to, or read from, the disk.  Therefore, they
don't need to be encrypted/decrypted at all.

So, you can achieve parity by making sure that the result set sensitivity is
equivalent in both cases.

Tim Young
Elevate Software
www.elevatesoft.com
Fri, May 15 2015 11:23 AM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Barry,

<< I'm wondering if the Stored Procedure is re-initializing the Blowfish
algorithm for each row that is sent to the client? >>

No.

<< If Steve has the EDB server source code and a profiler like aQTime or any
of the free ones (SamplingProfiler, gpProfile, asmProfiler), he can monitor
the server code to find the bottleneck. He can try the same on his client
code too. >>

If someone has a support issue, they should contact us like Steve did, not
waste their time trying to diagnose it on their own.

Tim Young
Elevate Software
www.elevatesoft.com