Mailing List Archive

Session duplicate key constraints on concurrent requests
Dear all,

I've written about this issue a couple of times in the past and it
seems that this still hasn't been fixed. Here's what's happening:

1. Request A comes in with an expired session cookie, C::P::Session
tries to find the session for the given cookie but finds nothing.
2. Meanwhile, Request B comes in, also tries to find the session for
the same(!) cookie, and comes away empty-handed as well.
3. Both requests try to insert a new session; one succeeds, the other
dies(!) with a duplicate key constraint error from MySQL.
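For what it's worth, the failure in step 3 is easy to reproduce outside of
Catalyst with any backend that has a unique key on the session id. A tiny
sketch (Python with an in-memory SQLite table standing in for the MySQL
sessions table; all names here are made up for illustration):

```python
import sqlite3

# One shared database, two "requests" racing to create the same session
# row after both lookups came back empty (steps 1-3 above).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, session_data TEXT)")

def insert_session(conn, session_id):
    # What the session plugin effectively does after a failed lookup.
    conn.execute("INSERT INTO sessions (id) VALUES (?)", (session_id,))

insert_session(db, "session:abc")      # request A: succeeds
try:
    insert_session(db, "session:abc")  # request B: duplicate key error
except sqlite3.IntegrityError as e:
    print("request B died:", e)
```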

If this happened only once in a while, I'd be okay with it, but it
happens every 3-5 minutes. We are using the latest versions of
everything - except DBIx::Class, which is still on 0.08107 - and the
sessions table is InnoDB with big enough columns.

Now, in this day and age concurrent requests are hardly rare. The
POD of C::P::Session even describes this race condition, but the author
assumes that the session is just silently overwritten, which isn't the
case when using a database backend with key constraints. Imagine a
page loading some additional content via AJAX or some web-bug pixels
and you have exactly the scenario described above. Is it really
desirable that the App dies on the user when this happens? Shouldn't
we handle this in a more sane way, maybe by just ignoring the error
because it operates on the same session anyways?

I've put together a little test app and an AnyEvent-based script to
run concurrent requests against the App which should illustrate the
problem. I'd appreciate it if you folks could check it out and tell me
if you're seeing the problem as well:

http://www.funkreich.de/crash_sessions.tar.bz2

Of course the App should be started with a forking server to allow
concurrent requests, for example:

CATALYST_DEBUG=0 CATALYST_ENGINE="HTTP::Prefork" perl script/crashsession_server.pl

Afterwards you can run the testing script "crash_sessions.pl" which
should give you an HTTP 500 error after you've run it a couple of
times. A look at the App server output will hopefully give you the
duplicate key constraint error I'm talking about :)

Any ideas on how best to go about fixing this?

Thanks a lot!

--Toby

_______________________________________________
List: Catalyst@lists.scsys.co.uk
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/catalyst@lists.scsys.co.uk/
Dev site: http://dev.catalyst.perl.org/
Re: Session duplicate key constraints on concurrent requests
On Friday 07 October 2011 14:48:14 Tobias Kremer wrote:

> I've written about this issue a couple of times in the past and it
> seems that this still hasn't been fixed. Here's what's happening:
>
> 1. Request A comes in with an expired session cookie, C::P::Session
> tries to find the session for the given cookie but finds nothing.
> 2. Meanwhile, Request B comes in, also trying to find the session for
> the same(!) cookie and goes away with empty hands as well.
> 3. Both requests try to insert a new session, one succeeds, the other
> dies(!) with a duplicate key constraint error from MySQL.

How is the session key calculated? Any idea? Randomly? So could two
processes end up calculating the same session value?

--
So long... Erik


Re: Session duplicate key constraints on concurrent requests
On Friday 07 October 2011 14:48:14 Tobias Kremer wrote:

> 3. Both requests try to insert a new session, one succeeds, the other
> dies(!) with a duplicate key constraint error from MySQL.

Sounds like the "insert" should be changed to an "insert_or_update"
wrapped in a transaction with "serializable" transaction isolation
level (because we might not know whether the "insert_or_update"
method is atomic)... or something along those lines, without having
looked at the code.

Or alternatively, the failed read of the old cookie and the insert of
the new cookie could be wrapped in a single "serializable" transaction.
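As a sketch of that second idea: take a write lock before the (possibly
empty) read, so no other request can sneak an insert in between the lookup
and our own insert. This is only an illustration of the locking pattern,
not Catalyst code; SQLite's BEGIN IMMEDIATE stands in for a serializable
transaction, and all names are invented:

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, session_data TEXT)")

def find_or_create(conn, session_id):
    # BEGIN IMMEDIATE acquires the write lock up front, so the
    # (possibly empty) SELECT below cannot race with another writer.
    conn.execute("BEGIN IMMEDIATE")
    try:
        row = conn.execute("SELECT id FROM sessions WHERE id = ?",
                           (session_id,)).fetchone()
        if row is None:
            conn.execute("INSERT INTO sessions (id) VALUES (?)",
                         (session_id,))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
    return session_id

find_or_create(db, "session:abc")  # first caller inserts
find_or_create(db, "session:abc")  # second caller just finds the row
```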

--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/

Re: Session duplicate key constraints on concurrent requests
On Fri, 7 Oct 2011, Erik Wasser wrote:

> How is the session key calculated? Any idea? Randomly? So could two
> processes end up calculating the same session value?

It is still the same session cookie as before, but it has already
expired from the database? Thus both requests try to re-insert it
simultaneously, which leads to the failure?

As long as both application instances are talking to the same DB
server (i.e. your load balancer does not use "random" as the
distribution method, but bases it on IP or something), a
"serializable" transaction solves this. If you have replication and
you are replicating the state tables... you might end up with broken
replication (if the collision happens in replication). I am quite
sure many other web applications are vulnerable to this as well.

I think the proper way to solve it is to drop the constraint on the
cookie and just insert the cookie and have an auto_increment ID in
the table. And when reading, select the cookie with the highest ID
(because there might be several).
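A quick sanity check of that layout (Python/SQLite purely for
illustration; the column names are invented): with no unique constraint
on the cookie id and a surrogate auto-increment column, both racing
inserts succeed, and readers simply pick one row deterministically.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# No unique constraint on the cookie id; the auto-increment "selector"
# column makes every insert succeed even for the same cookie.
db.execute("""CREATE TABLE sessions (
                  selector INTEGER PRIMARY KEY AUTOINCREMENT,
                  id TEXT,
                  session_data TEXT)""")

# Both racing requests insert the same cookie -- no error either way.
db.execute("INSERT INTO sessions (id) VALUES ('session:abc')")
db.execute("INSERT INTO sessions (id) VALUES ('session:abc')")

# Readers deterministically pick the row with the highest selector.
row = db.execute("""SELECT selector FROM sessions
                    WHERE id = ? ORDER BY selector DESC LIMIT 1""",
                 ("session:abc",)).fetchone()
```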

--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/

Re: Session duplicate key constraints on concurrent requests
On Fri, 7 Oct 2011, Janne Snabb wrote:

> I think the proper way to solve it is to drop the constraint on the
> cookie and just insert the cookie and have an auto_increment ID in
> the table. And when reading, select the cookie with the highest ID
> (because there might be several).

Something like this perhaps? Untested code.

--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/

diff -U5 -r Catalyst-Plugin-Session-Store-DBI-0.16/lib/Catalyst/Plugin/Session/Store/DBI.pm Catalyst-Plugin-Session-Store-DBI-0.16+sessionpatch//lib/Catalyst/Plugin/Session/Store/DBI.pm
--- Catalyst-Plugin-Session-Store-DBI-0.16/lib/Catalyst/Plugin/Session/Store/DBI.pm 2010-03-24 04:47:13.000000000 +0700
+++ Catalyst-Plugin-Session-Store-DBI-0.16+sessionpatch//lib/Catalyst/Plugin/Session/Store/DBI.pm 2011-10-08 01:13:30.183410460 +0700
@@ -120,10 +120,14 @@

sub session_store_dbi_id_field {
return shift->_session_plugin_config->{'dbi_id_field'} || 'id';
}

+sub session_store_dbi_selector_field {
+ return shift->_session_plugin_config->{'dbi_selector_field'} || 'selector';
+}
+
sub session_store_dbi_data_field {
return shift->_session_plugin_config->{'dbi_data_field'} || 'session_data';
}

sub session_store_dbi_expires_field {
@@ -148,11 +152,11 @@
my ( $table, $id_field, $data_field, $expires_field ) =
map { $c->${\"session_store_$_"} }
qw/dbi_table dbi_id_field dbi_data_field dbi_expires_field/;
$c->_session_sql( {
get_session_data =>
- "SELECT $data_field FROM $table WHERE $id_field = ?",
+ "SELECT $data_field FROM $table WHERE $id_field = ? ORDER BY selector DESC LIMIT 1",
get_expires =>
"SELECT $expires_field FROM $table WHERE $id_field = ?",
check_existing =>
"SELECT 1 FROM $table WHERE $id_field = ?",
update_session =>
@@ -338,11 +342,12 @@

=head1 SYNOPSIS

# Create a table in your database for sessions
CREATE TABLE sessions (
- id char(72) primary key,
+ id char(72) key,
+ selector int auto_increment primary key,
session_data text,
expires int(10)
);

# In your app
@@ -354,20 +359,22 @@
dbi_dsn => 'dbi:mysql:database',
dbi_user => 'foo',
dbi_pass => 'bar',
dbi_table => 'sessions',
dbi_id_field => 'id',
+ dbi_selector_field => 'selector',
dbi_data_field => 'session_data',
dbi_expires_field => 'expires',
});

# Or use an existing database handle from a DBIC/CDBI class
MyApp->config('Plugin::Session' => {
expires => 3600,
dbi_dbh => 'DBIC', # which means MyApp::Model::DBIC
dbi_table => 'sessions',
dbi_id_field => 'id',
+ dbi_selector_field => 'selector',
dbi_data_field => 'session_data',
dbi_expires_field => 'expires',
});

# ... in an action:
@@ -423,10 +430,17 @@
=head2 dbi_id_field

The name of the field on your sessions table which stores the session ID.
Defaults to C<id>.

+=head2 dbi_selector_field
+
+The name of the field on your sessions table which is used to differentiate
+amongst multiple instances of the id field (which happens when multiple
+instances try to store the same session at the same time).
+Defaults to C<selector>.
+
=head2 dbi_data_field

The name of the field on your sessions table which stores session data.
Defaults to C<session_data>.

@@ -435,13 +449,14 @@
The name of the field on your sessions table which stores the expiration
time of the session. Defaults to C<expires>.

=head1 SCHEMA

-Your 'sessions' table must contain at minimum the following 3 columns:
+Your 'sessions' table must contain at minimum the following 4 columns:

- id char(72) primary key
+ id char(72) key
+ selector int auto_increment primary key
session_data text
expires int(10)

The 'id' column should probably be 72 characters. It needs to handle the
longest string that can be returned by

Re: Session duplicate key constraints on concurrent requests
On Fri, 7 Oct 2011, Janne Snabb wrote:

> Something like this perhaps? Untested code.

Sorry about the flooding. Here is another, much simpler solution, but
it works only on MySQL. I think there is no standard SQL syntax to
accomplish the same without extra DB fields.

--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/


diff -U5 -r Catalyst-Plugin-Session-Store-DBI-0.16/lib/Catalyst/Plugin/Session/Store/DBI.pm Catalyst-Plugin-Session-Store-DBI-0.16+otherpatch/lib/Catalyst/Plugin/Session/Store/DBI.pm
--- Catalyst-Plugin-Session-Store-DBI-0.16/lib/Catalyst/Plugin/Session/Store/DBI.pm 2010-03-24 04:47:13.000000000 +0700
+++ Catalyst-Plugin-Session-Store-DBI-0.16+otherpatch/lib/Catalyst/Plugin/Session/Store/DBI.pm 2011-10-08 02:21:12.227212244 +0700
@@ -156,11 +156,11 @@
check_existing =>
"SELECT 1 FROM $table WHERE $id_field = ?",
update_session =>
"UPDATE $table SET $data_field = ?, $expires_field = ? WHERE $id_field = ?",
insert_session =>
- "INSERT INTO $table ($data_field, $expires_field, $id_field) VALUES (?, ?, ?)",
+ "INSERT IGNORE INTO $table ($data_field, $expires_field, $id_field) VALUES (?, ?, ?)",
update_expires =>
"UPDATE $table SET $expires_field = ? WHERE $id_field = ?",
delete_session =>
"DELETE FROM $table WHERE $id_field = ?",
delete_expired_sessions =>
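The effect of that one-line change can be sanity-checked with SQLite's
INSERT OR IGNORE, the closest portable cousin of MySQL's INSERT IGNORE
(names invented for illustration): the first insert wins, and the losing
duplicate becomes a silent no-op instead of an exception.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, session_data TEXT)")

# Both racing requests run the same statement; neither one can die
# with a duplicate key error -- the second insert is silently skipped.
for _ in range(2):
    db.execute("INSERT OR IGNORE INTO sessions (id) VALUES (?)",
               ("session:abc",))

count = db.execute("SELECT COUNT(*) FROM sessions").fetchone()[0]
print(count)  # one surviving row
```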

Re: Session duplicate key constraints on concurrent requests
Hi Janne,

I appreciate your taking the time to look into this. Unfortunately your patches are for the Store::DBI backend, whereas I'm using Store::DBIC (DBIx::Class). The INSERT IGNORE solution is probably exactly what this case needs, but as you said, it's MySQL-specific and thus not suitable for the masses.

I suppose that to really solve this for most databases, we'd first need a portable way to check whether a database error is a duplicate key constraint violation. This should probably be abstracted away in DBIx::Class.

Here's hoping that somebody comes up with a simpler solution ... My only alternative right now would be switching to a Memcached session backend.

--Toby



On 07.10.2011, at 21:25, Janne Snabb wrote:

> On Fri, 7 Oct 2011, Janne Snabb wrote:
>
>> Something like this perhaps? Untested code.
>
> Sorry about flooding. This is another much simpler solution but works
> only on MySQL. I think there is no standard SQL syntax to accomplish
> the same without extra DB fields.
>
> --
> Janne Snabb / EPIPE Communications
> snabb@epipe.com - http://epipe.com/
>
>
> diff -U5 -r Catalyst-Plugin-Session-Store-DBI-0.16/lib/Catalyst/Plugin/Session/Store/DBI.pm Catalyst-Plugin-Session-Store-DBI-0.16+otherpatch/lib/Catalyst/Plugin/Session/Store/DBI.pm
> --- Catalyst-Plugin-Session-Store-DBI-0.16/lib/Catalyst/Plugin/Session/Store/DBI.pm 2010-03-24 04:47:13.000000000 +0700
> +++ Catalyst-Plugin-Session-Store-DBI-0.16+otherpatch/lib/Catalyst/Plugin/Session/Store/DBI.pm 2011-10-08 02:21:12.227212244 +0700
> @@ -156,11 +156,11 @@
> check_existing =>
> "SELECT 1 FROM $table WHERE $id_field = ?",
> update_session =>
> "UPDATE $table SET $data_field = ?, $expires_field = ? WHERE $id_field = ?",
> insert_session =>
> - "INSERT INTO $table ($data_field, $expires_field, $id_field) VALUES (?, ?, ?)",
> + "INSERT IGNORE INTO $table ($data_field, $expires_field, $id_field) VALUES (?, ?, ?)",
> update_expires =>
> "UPDATE $table SET $expires_field = ? WHERE $id_field = ?",
> delete_session =>
> "DELETE FROM $table WHERE $id_field = ?",
> delete_expired_sessions =>
>


Re: Session duplicate key constraints on concurrent requests
* Tobias Kremer <tobias.kremer@gmail.com> [2011-10-07 15:00]:
> I've written about this issue a couple of times in the past and it
> seems that this still hasn't been fixed.

Maybe the answer is mu.

Why use a session at all?

Re: Session duplicate key constraints on concurrent requests
On Fri, 7 Oct 2011, Tobias Kremer wrote:

> I appreciate your taking the time to look into this. Unfortunately
> your patches are for the Store::DBI backend, whereas I'm using
> Store::DBIC (DBIx::Class).

So, the code you are talking about is then
Catalyst/Plugin/Session/Store/DBIC/Delegate.pm? It currently has
the following in _load_row():

my $load_sub = sub {
return $self->model->find_or_create({ $self->id_field => $key })
};

my $row;
if (blessed $self->model and $self->model->can('result_source')) {
$row = $self->model->result_source->schema->txn_do($load_sub);
}

The code is clearly incorrect. The person writing it probably just
thought "oh well, I'll wrap it in transaction -- it probably helps
here". Even if you have the highest transaction isolation level
"serializable", you will not lock on SELECTs which return no rows.

It is trivial to fix in some of the ways that I mentioned:

#1. Drop the unique constraint on id_field and add selector_field
which is auto_increment in the table (see my earlier post for more
details) and remove the unneeded transaction:

my $row;
if (blessed $self->model and $self->model->can('result_source')) {
    $row = $self->model->find({ $self->id_field => $key },
        { order_by => { '-desc' => $self->selector_field },
          rows => 1 })
        || $self->model->create({ $self->id_field => $key });
}

Note that this requires support for an auto_increment/serial type in
the underlying DB. This is the only somewhat generic solution which
is safe with MySQL multi-master asynchronous replication (where you
might be writing to any of several replicas). That is why this
is my generic recommendation.

#2. Implement find_or_create_atomic() in DBIC:

my $row;
if (blessed $self->model and $self->model->can('result_source')) {
$row = $self->model->find_or_create_atomic({ $self->id_field => $key });
}

#3. Implement "LOCK TABLES" in DBIC and use locks to protect your
critical section:

my $row;
if (blessed $self->model and $self->model->can('result_source')) {
$self->model->rwlock();
$row = $self->model->find_or_create({ $self->id_field => $key });
$self->model->unlock();
}

#4. Catch the duplicate key exception in application code:

my $row;
if (blessed $self->model and $self->model->can('result_source')) {
    $row = $self->model->find({ $self->id_field => $key })
        || eval { $self->model->create({ $self->id_field => $key }) }
        || $self->model->find({ $self->id_field => $key });
}

(Add code to get the SQLSTATE from the DBI driver and check whether
the error was an integrity constraint violation ($sqlstate =~ /^23/)
if desired.)
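Option #4 can be sketched in a backend-agnostic way like so (Python/SQLite
for illustration; sqlite3 surfaces the violation as an IntegrityError
rather than an SQLSTATE, so the exception class plays the role of the
$sqlstate =~ /^23/ check):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, session_data TEXT)")

def find_or_create(conn, session_id):
    # Optimistic find-then-create; losing the race is harmless.
    row = conn.execute("SELECT id FROM sessions WHERE id = ?",
                       (session_id,)).fetchone()
    if row is None:
        try:
            conn.execute("INSERT INTO sessions (id) VALUES (?)",
                         (session_id,))
        except sqlite3.IntegrityError:
            # Duplicate key: a concurrent request inserted the row
            # between our SELECT and our INSERT. The row exists now,
            # which is all we wanted.
            pass
        row = conn.execute("SELECT id FROM sessions WHERE id = ?",
                           (session_id,)).fetchone()
    return row[0]

print(find_or_create(db, "session:abc"))  # inserts
print(find_or_create(db, "session:abc"))  # finds
```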


Hope this helps,
--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/

Re: Session duplicate key constraints on concurrent requests
On Sat, 8 Oct 2011, Janne Snabb wrote:

> return $self->model->find_or_create({ $self->id_field => $key })

Just an update:

I discussed this with mst on IRC and we concluded that my
initial suggestion for the fix is also not correct. It is likely
to eliminate the SQL error, but also likely to cause spurious
behaviour in the upper layers.

The upper layer (which I have not looked at; is there a call/inheritance
graph of Catalyst available somewhere? :)) should be notified that
the old session is gone, thus we cannot just autovivify it in the
storage with the old id. Rather, a new id should be issued so that
the upper layers have a chance of noticing what happened. I
think that would also eliminate the collisions (as the new id is a
new random string).

*::Session::Storage::Cache::* is presumably a better choice than
DBI/DBIC in the usual case.

--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/

Re: Session duplicate key constraints on concurrent requests
Here's an interesting fact: whenever I hit MyApp with an invalid
session id (i.e. one that isn't in the store), the plugin tries to
insert this exact session id. If two concurrent requests with an
invalid session come in, they both try to (re-)create this invalid
session. This is due to the use of find_or_create(). But shouldn't the
correct approach be:

a) Try to find the session, e.g. $self->model->find({ $self->id_field => $key })
b) If found, use this session because it's still valid.
c) If not found, create() an entirely NEW(!) session, with a new
session-ID and insert it.
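The a/b/c flow above, sketched in backend-neutral terms (Python/SQLite
purely for illustration; the id format and helper names are made up): a
miss never resurrects the stale id but mints a fresh random one, which as
a side effect also makes colliding inserts essentially impossible.

```python
import secrets
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, session_data TEXT)")

def load_session(conn, cookie_id):
    # (a) try to find the session named by the cookie
    row = conn.execute("SELECT id FROM sessions WHERE id = ?",
                       (cookie_id,)).fetchone()
    if row is not None:
        return row[0]          # (b) found: still valid, keep using it
    # (c) not found: do NOT re-insert the stale id; mint a new one.
    # Two racing requests each get their own random id, so there is
    # nothing left to collide on.
    new_id = "session:" + secrets.token_hex(16)
    conn.execute("INSERT INTO sessions (id) VALUES (?)", (new_id,))
    return new_id

fresh = load_session(db, "session:invalid")  # miss: brand-new id
same = load_session(db, fresh)               # hit: same id comes back
```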

Why on earth would we want to insert an invalid session into the
database, just to delete it afterwards because it was invalid and then
create a fresh session? Here's an SQL log that shows what's going on:

BEGIN WORK
BEGIN WORK
SELECT me.id, me.a_session, me.expires FROM sessions2 me WHERE ( me.id = ? ): 'session:invalid'
SELECT me.id, me.a_session, me.expires FROM sessions2 me WHERE ( me.id = ? ): 'session:invalid'
INSERT INTO sessions2 ( id) VALUES ( ? ): 'session:invalid'
INSERT INTO sessions2 ( id) VALUES ( ? ): 'session:invalid'
COMMIT
DELETE FROM sessions2 WHERE ( id = ? ): 'session:invalid'
ROLLBACK
DELETE FROM sessions2 WHERE ( id = ? ): 'flash:invalid'
BEGIN WORK
SELECT me.id, me.a_session, me.expires FROM sessions2 me WHERE ( me.id = ? ): 'session:new-session'
INSERT INTO sessions2 ( id) VALUES ( ? ): 'session:new-session'
COMMIT
UPDATE sessions2 SET a_session = ?, expires = ? WHERE ( id = ? ): 'foo', '1318239160', 'session:new-session'

So, isn't the use of find_or_create() just plain wrong or am I seeing
things here? :)

--Toby




On Sat, Oct 8, 2011 at 2:23 PM, Janne Snabb <snabb@epipe.com> wrote:
> On Sat, 8 Oct 2011, Janne Snabb wrote:
>
>>     return $self->model->find_or_create({ $self->id_field => $key })
>
> Just an update:
>
> I discussed about this with mst on irc and we concluded that my
> initial suggestion for the fix is also not correct. It is likely
> to eliminate the SQL error but also likely to cause spurious behaviour
> in the upper layers.
>
> The upper layer (which I have not looked at, is there a call/inheritance
> graph of Catalyst available somewhere? :) should get notified that
> the old session is gone, thus we can not just autovivify it in the
> storage with the old id. Rather a new id should be issued so that
> the upper layers will have a chance of noticing what happened. I
> think that would also eliminate the collisions (as the new id is a
> new random string).
>
> *::Session::Storage::Cache::* is presumably a better choice than
> DBI/DBIC in the usual case.
>
> --
> Janne Snabb / EPIPE Communications
> snabb@epipe.com - http://epipe.com/
>
>

Re: Session duplicate key constraints on concurrent requests
On Mon, 10 Oct 2011, Tobias Kremer wrote:

> So, isn't the use of find_or_create() just plain wrong or am I seeing
> things here? :)

I have also been thinking that the correct solution might be as
simple as replacing find_or_create() with find(), but I have not
managed to look at the upper layers to see how exactly the session
store methods are invoked... thus I am not sure.

In the case of 2 simultaneous connections with an expired session
cookie (which is what currently triggers the SQL constraint issue),
both connections would get a new session, but the browser gets to
decide which one it holds on to, and which one is forgotten and
eventually also expired from the server. I think the browser will
hold on to the session that it receives later.

--
Janne Snabb / EPIPE Communications
snabb@epipe.com - http://epipe.com/

Re: Session duplicate key constraints on concurrent requests
Hi,

Am 10.10.2011 um 10:42 schrieb Tobias Kremer:

> a) Try to find the session, e.g. $self->model->find({ $self->id_field => $key })
> b) If found, use this session because it's still valid.
> c) If not found, create() an entirely NEW(!) session, with a new
> session-ID and insert it.

wouldn't that result in two new sessions? Your first request would create "session:new1" and the second "session:new2", so you'll end up loosing info from "session:new1".

Matthias

--
rainboxx Software Engineering
Matthias Dietrich

rainboxx Matthias Dietrich | Phone: +49 7141 / 2 39 14 71
Königsallee 43 | Fax : +49 3222 / 1 47 63 00
71638 Ludwigsburg | Mobil: +49 151 / 50 60 78 64
| WWW : http://www.rainboxx.de

CPAN: http://search.cpan.org/~mdietrich/
XING: https://www.xing.com/profile/Matthias_Dietrich18
GULP: http://www.gulp.de/profil/rainboxx.html





Re: Session duplicate key constraints on concurrent requests
On Mon, Oct 10, 2011 at 2:26 PM, Matthias Dietrich <mdietrich@cpan.org> wrote:
> wouldn't that result in two new sessions?  Your first request would create "session:new1" and the second "session:new2", so you'll end up loosing info from "session:new1".

Yes, but does that really matter? If you're using Store::Memcached for
example, one session would overwrite the other's data, thus you'll
loose data anyways. There's no perfect solution to this problem, I
guess :)

On Mon, Oct 10, 2011 at 1:53 PM, Janne Snabb <snabb@epipe.com> wrote:
> I think the browser will hold on to the session that it receives later.

That's exactly what would happen, because the browser will store only
the last cookie it received.

Given the few responses we've received so far, I take it that
nobody's really using the DBIC backend in a medium-sized app? What's
your favorite session backend that works in a load-balanced
environment and handles quite a lot of traffic? :)

Thanks!

--Toby

Re: Session duplicate key constraints on concurrent requests
I just tried Session::Store::DBI and guess what: It does exactly what
I suggested in my previous e-mail and doesn't cause any duplicate key
constraint errors (even with 50 concurrent requests):

[debug] Found sessionid "invalid-session-id" in cookie
[debug] Deleting session(session expired)
[debug] Created session "new-session-id"

This leads me to believe that Session::Store::DBIC is simply broken!

I'll try switching to Store::DBI for now and see if that solves the
problem in production.

--Toby



On Mon, Oct 10, 2011 at 2:56 PM, Tobias Kremer <tobias.kremer@gmail.com> wrote:
> On Mon, Oct 10, 2011 at 2:26 PM, Matthias Dietrich <mdietrich@cpan.org> wrote:
>> wouldn't that result in two new sessions?  Your first request would create "session:new1" and the second "session:new2", so you'll end up loosing info from "session:new1".
>
> Yes, but does that really matter? If you're using Store::Memcached for
> example, one session would overwrite the other's data, thus you'll
> loose data anyways. There's no perfect solution to this problem, I
> guess :)
>
> On Mon, Oct 10, 2011 at 1:53 PM, Janne Snabb <snabb@epipe.com> wrote:
>> I think the browser will hold on to the session that it receives later.
>
> That's exactly what would happen, because the browser will store only
> the last cookie it received.
>
> Due to the few responses we've received so far, I take it that
> nobody's really using the DBIC backend in a medium-sized app? What's
> your favorite session backend (that works in a load-balanced
> environment) and handles quite a lot of traffic? :)

Re: Session duplicate key constraints on concurrent requests
Am 10.10.2011 um 17:00 schrieb Tobias Kremer:

>> Your right ;-).
>
> And that's almost as bad as "would of been" ... ;-)

*tired
$son is consuming too much time during the nights ;-).


Back on topic:

Am 10.10.2011 um 14:56 schrieb Tobias Kremer:

> Yes, but does that really matter? If you're using Store::Memcached for
> example, one session would overwrite the other's data, thus you'll
> loose data anyways. There's no perfect solution to this problem, I
> guess :)

That depends. At least you'll lose(!) another request's data. But then you'd lose the data of the second request, which may not be better. So... I think Store::DBI's way seems good.

Matthias


--
rainboxx Software Engineering
Matthias Dietrich

rainboxx Matthias Dietrich | Phone: +49 7141 / 2 39 14 71
Königsallee 43 | Fax : +49 3222 / 1 47 63 00
71638 Ludwigsburg | Mobil: +49 151 / 50 60 78 64
| WWW : http://www.rainboxx.de

CPAN: http://search.cpan.org/~mdietrich/
XING: https://www.xing.com/profile/Matthias_Dietrich18
GULP: http://www.gulp.de/profil/rainboxx.html





Re: Session duplicate key constraints on concurrent requests
On 10 October 2011 16:00, Tobias Kremer <tobias.kremer@gmail.com> wrote:
> On Mon, Oct 10, 2011 at 4:07 PM, Matthias Dietrich <mdietrich@cpan.org> wrote:
>> Am 10.10.2011 um 15:59 schrieb Denny:
>>> The word you both want is 'lose'.  Loose means something slightly different (and slightly odd, when discussing data).
>
> Absolutely! Sorry for the typo :)
>
>> Your right ;-).
>
> And that's almost as bad as "would of been" ... ;-)

Your quiet riot :)

Re: Session duplicate key constraints on concurrent requests
To get the discussion back on track: I switched from
Session::Store::DBIC to Session::Store::DBI on our production systems
a couple of hours ago and haven't had a single duplicate key
constraint error. Yay!

I'll file a bug report for C::P::Session::Store::DBIC.

Cheers!

--Toby



On Tue, Oct 11, 2011 at 11:22 AM, Will Crawford
<billcrawford1970@gmail.com> wrote:
> On 10 October 2011 16:00, Tobias Kremer <tobias.kremer@gmail.com> wrote:
>> On Mon, Oct 10, 2011 at 4:07 PM, Matthias Dietrich <mdietrich@cpan.org> wrote:
>>> Am 10.10.2011 um 15:59 schrieb Denny:
>>>> The word you both want is 'lose'.  Loose means something slightly different (and slightly odd, when discussing data).
>>
>> Absolutely! Sorry for the typo :)
>>
>>> Your right ;-).
>>
>> And that's almost as bad as "would of been" ... ;-)
>
> Your quiet riot :)
>
>
