[Lazarus] data matrix with thousands of columns
Leonardo M. Ramé
l.rame at griensu.com
Tue Mar 26 19:07:29 CET 2013
On 2013-03-26 15:38:57 +0100, Andrea Mauri wrote:
> Dear all,
> I am looking for the best option in order to store big datasets with
> thousands of columns.
> The dataset can contain from tens up to hundreds of thousands of
> lines, and thousands of columns (some columns are strings, some are
> numbers).
> Which is the best option to store and retrieve information from a
> dataset like this?
> Currently I am using SQLite, with tens of tables of at most 200
> columns each. But I am not sure this is the best option.
> Within my application I have to query this dataset:
> - retrieve a particular line (all or some columns);
> - retrieve a particular column (all or some lines);
> - order the dataset with respect to a particular column;
> - delete/add line(s);
> - delete/add column(s);
> ...
>
> SQLite is easy to use when I need to query the dataset, but I am not
> sure that it is the most suitable.
>
> Any hint?
> Andrea Mauri
>
I think this question is more related to databases than to Lazarus/FPC.
If you think you'll be adding columns regularly, one solution that comes
to my mind is this:
1) Create a "data_columns" table:

-- one record per logical column (the column metadata)
create table data_columns(
  idcolumn integer,
  column_name varchar(20),
  primary key(idcolumn)
);
2) Create the "data_rows" table:

-- one record per "cell"; a logical line is the set of records sharing
-- an idrow, so the primary key must also include idcolumn
create table data_rows(
  idrow integer,
  idcolumn integer,
  value varchar(100),
  primary key(idrow, idcolumn),
  -- SQLite cannot add a foreign key via ALTER TABLE, so declare it
  -- inline here
  foreign key(idcolumn) references data_columns(idcolumn)
);
3) Then you can define the columns and add the data to the "data_rows"
table, recording for each record which column that "cell" belongs to
(see the example below).
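For example (a minimal sketch using the tables above; the column names
and values are of course made up), defining two columns, storing one
line, and querying it back would look like this:

-- hypothetical column definitions and the cells of one line (idrow = 1)
insert into data_columns(idcolumn, column_name) values (1, 'name');
insert into data_columns(idcolumn, column_name) values (2, 'weight');

insert into data_rows(idrow, idcolumn, value) values (1, 1, 'sample_a');
insert into data_rows(idrow, idcolumn, value) values (1, 2, '12.5');

-- retrieve a particular line (all of its cells):
select c.column_name, r.value
from data_rows r
join data_columns c on c.idcolumn = r.idcolumn
where r.idrow = 1;

-- retrieve a particular column (all lines):
select r.idrow, r.value
from data_rows r
where r.idcolumn = 2;

Note that with this layout numbers end up stored as text in the generic
"value" column, so you lose numeric typing unless you cast in your
queries.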
4) To get the data back in "spreadsheet" form, please read this:
http://stackoverflow.com/questions/3002499/postgresql-crosstab-query
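The crosstab function in that link is PostgreSQL-specific. A portable
alternative that also works in SQLite is conditional aggregation; this
sketch assumes the two example columns defined above, and in practice
your application would generate the list of max(case ...) expressions
from the "data_columns" table:

-- pivot the cells back into one result row per idrow
select r.idrow,
       max(case when r.idcolumn = 1 then r.value end) as name,
       max(case when r.idcolumn = 2 then r.value end) as weight
from data_rows r
group by r.idrow
order by name;  -- order the dataset by any pivoted column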
Regards,
--
Leonardo M. Ramé
http://leonardorame.blogspot.com