Is there a function or something similar in SQL Server 2005 that can tell you what row a record is in a table, without any indexes?
For example, if I executed something like
select row_number() from mytable where id = '30532'
and the record were the 100th record, I'd get back a result of 100.
--
MG
|||Why is that important? Rows sit in a table in no particular order. For the result to have meaning, the ROW_NUMBER() function has to have an ORDER BY associated with it.
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Hurme" <michael.geles@.thomson.com> wrote in message news:59CFB5F8-0709-4FB7-B178-4127F99A2FDC@.microsoft.com...
[original question snipped]
|||Wrong newsgroup, but you could try:
SELECT *
FROM
(
    SELECT ROW_NUMBER() OVER (ORDER BY column_your_dbtable_is_sortedby ASC) AS rownum, id
    FROM mytable
) AS a
WHERE id = '30532'
|||Tables have no order, so there is no such thing as a determinate row number. "100th record" is arbitrary and meaningless unless you state some ordering based on the data itself.
If you just want some arbitrary row to call the "100th", you can just do TOP 1:
SELECT TOP 1 col1, col2, col3
FROM mytable;
--
David Portas
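Combining the two answers, here is a hedged sketch of how MG could get an ordinal position once an explicit ordering is chosen. It orders by id, assuming id is unique and that its type makes the comparison meaningful; the table and column names are the ones from the thread:

```sql
-- Position of id '30532' under an explicit ORDER BY id (SQL Server 2005+).
SELECT rownum
FROM
(
    SELECT ROW_NUMBER() OVER (ORDER BY id ASC) AS rownum, id
    FROM mytable
) AS numbered
WHERE id = '30532';

-- Equivalent COUNT(*)-based form that needs no window function,
-- assuming id is unique:
SELECT COUNT(*) AS rownum
FROM mytable
WHERE id <= '30532';
```

Either query returns 100 only under that specific ordering; a different ORDER BY gives a different, equally valid answer.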
Friday, March 23, 2012
finding a row number
Wednesday, March 21, 2012
Find the size of an index row
Hey guys,
I would like to enumerate all the indexes on a table and find the size of each index row: not the total space occupied, but the bytes per row of each index. E.g.: table EMP has id bigint, name varchar(64), address varchar(64), phone bigint, zip int, etc. Index 1 (id, name), Index 2 (zip, phone), Index 3 (phone). Each row in this table could occupy up to 8+64+64+8+4 bytes, so the index rows associated with each table row would occupy up to 8+64, 8+8, and 8 bytes respectively. That's what I want. It would later be used to calculate the space required for a given number of records to be inserted.
See "Estimating the Size of a Table with a Clustered Index" in BOL.
David Gugick
Quest Software
www.quest.com
|||Thanks David,
But that's not what I want, as I explained in my earlier post. That article gives the method for calculating the space, which is what I may do to reach my final goal. The immediate goal, though, is to dynamically find the row size of each non-clustered index on the table. I want to do it dynamically so that if I add or remove indexes, my stored procedure still works, rather than relying on a hard-coded value.
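One way to get the dynamic behavior described here can be sketched against the SQL 2005 catalog views (the table name is taken from the example). Note this sums declared maximum column lengths only, ignoring row headers, the null bitmap, variable-length offsets, and the clustered key or RID carried in non-clustered leaf rows:

```sql
-- Approximate worst-case key bytes per index row, per index on dbo.EMP.
SELECT i.name AS index_name,
       SUM(c.max_length) AS max_key_bytes
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id
   AND ic.index_id  = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id
   AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.EMP')
  AND i.index_id > 0            -- skip the heap entry
GROUP BY i.name;
```

Because it reads the catalog at run time, a stored procedure built on this keeps working when indexes are added or dropped.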
|||A couple of thoughts on your question.
1) SQL Server stores everything in 8KB pages, and pages are grouped into 8-page extents. Based on the fill factor and the size of your index rows, SQL Server fits as many rows on each page as it can.
2) In addition to its own key data, a non-clustered index row always contains the clustered index key, or a pointer into the heap (a RID) if no clustered index exists. Additionally, a varchar column does not always take up its full declared length; varchars are variable-length and are stored using only the bytes taken up by the actual value. So your example is not actually an accurate picture of how big an index row would be (and it further depends on whether the index is clustered or non-clustered).
I'm not sure exactly what you are trying to do, but in SQL 2000 you can use DBCC SHOWCONTIG to see how many pages and extents an index takes up. It will also show you any fragmentation in those pages and extents. In SQL 2005 this command is replaced by the DMV sys.dm_db_index_physical_stats, which also provides min, avg, and max record sizes; that might be what you are looking for. I don't know whether you can get that data in SQL 2000.
The article in BOL that David pointed you to is a good illustration of how that calculation would be done manually for a particular table and its indexes.
HTH
Ryan Powers
Clarity Consulting
http://www.claritycon.com
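On SQL 2005, the DMV Ryan mentions can report per-index record sizes directly. A sketch, with the EMP table name carried over from the example; SAMPLED or DETAILED mode is needed, since LIMITED leaves the record-size columns NULL:

```sql
-- Min/avg/max leaf-record sizes for each index on dbo.EMP (SQL 2005).
SELECT i.name AS index_name,
       s.index_level,
       s.min_record_size_in_bytes,
       s.avg_record_size_in_bytes,
       s.max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.EMP'),
                                    NULL, NULL, 'SAMPLED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id
   AND i.index_id  = s.index_id;
```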
|||I totally understand, and I am aware of the points mentioned. The main reason I want to do this is for size estimation. When we archive and delete some records, we store the number of rows deleted. When we reload the data, we want to estimate the space and check whether there is enough to do the operation. It's actually much more complicated than what I have just explained, but that's the idea. I don't want to run sp_spaceused before and after, for various other reasons, and this seems to be the best approach for us. If I cannot find index sizes dynamically, then I have to hard-code them somewhere. Again, we are happy with an approximation, so we don't really care about varchar, etc.; actually, the tables I'm interested in don't have varchar columns.
|||Okay. So it sounds like you are using SQL 2000, and you are looking to programmatically access the indexes, the columns that make up those indexes, and the sizes of those columns, so that you can estimate size. I could be jumping to the wrong conclusions.
There are some undocumented stored procedures that could help you get there. The following all seem to give you parts of what you want:
sp_MStablespace
sp_MSindexspace
sp_MShelpindex
sp_MShelpcolumns
sp_columns_rowset
For specifics on using any or all of the above, follow this link:
http://www.sql-server-performance.co...umented_sp.asp
Note that since these are undocumented, they won't necessarily work in SQL 2005.
HTH
Ryan Powers
Clarity Consulting
http://www.claritycon.com
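For the space check itself, the arithmetic this thread is circling around can be sketched in 2005-era T-SQL. All figures here are hypothetical: 8096 bytes is the usable space on an 8KB page, and the 2-byte term is the per-row slot-array entry; a real estimate should follow the fuller formula in the BOL article David referenced:

```sql
-- Rough KB needed to reload N rows into one index (hypothetical figures).
DECLARE @rows bigint, @bytes_per_index_row int, @rows_per_page int;
SET @rows = 1000000;
SET @bytes_per_index_row = 72;              -- e.g. Index 1 (id, name): 8 + 64
SET @rows_per_page = 8096 / (@bytes_per_index_row + 2);
SELECT CEILING(1.0 * @rows / @rows_per_page) * 8 AS estimated_kb;
```

Driving @bytes_per_index_row from the catalog instead of a constant gives the dynamic behavior the poster wants.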
> cannot dynamically find index sizes, then I have to hard code it some wher
e,
> again we are happy with an approximation, hence don't really care about
> varchar, etc. Actually the tables I'm interested in doing this don't have
> varchar.
> "Ryan Powers" wrote:
>
I would like to enumerate all the indexes in a table and find the size of
the index, when I mean the size not the total space occupied but the size of
each row of the index. Eg: Table EMP has id bigint, name varchar(64),
address varchar(64), phone bigint, zip int, etc. Index 1 (id, name), Index
2(zip, phone), Index 3(phone) Each row in this table could occupy 8+64+64+8+
4
bytes. So each index associated with each row would occupy 8+64 + 8+8 + 8.
That's what I want. This would be used in later calculating the required
size based on number of records to be inserted.yodarules wrote:
> Hey Guys,
> I would like to enumerate all the indexes in a table and find the
> size of the index, when I mean the size not the total space occupied
> but the size of each row of the index. Eg: Table EMP has id bigint,
> name varchar(64), address varchar(64), phone bigint, zip int, etc.
> Index 1 (id, name), Index 2(zip, phone), Index 3(phone) Each row in
> this table could occupy 8+64+64+8+4 bytes. So each index associated
> with each row would occupy 8+64 + 8+8 + 8. That's what I want. This
> would be used in later calculating the required size based on number
> of records to be inserted.
See "Estimating the Size of a Table with a Clustered Index" in BOL.
David Gugick
Quest Software
www.quest.com|||Thanks David,
But thats not what I want, as I have explained in my post earlier. This
tells you the method to calcualte the space, which is what I might do when I
try to achieve my final goal. But the immediate goal is to dynamically find
the size of each non-clustered index on that table. The reason why I want t
o
do it dynamically is that, if I add or remove indexes, my SP should be able
to work with that rather than a hard coded value.
"David Gugick" wrote:
> yodarules wrote:
> See "Estimating the Size of a Table with a Clustered Index" in BOL.
> --
> David Gugick
> Quest Software
> www.quest.com
>|||A couple of thoughts on your question.
1) SQL Server stores everything in 8KB pages and stores pages in 8 page
extents. Based on the fill factor and size of your index SQL Server will fi
t
as many rows from your index within the page as it can.
2) A non-clustered index always contains a reference to the key of the
clustered index or a pointer to a place in the heap if no clustered index
exists, in addition to the data in the non-clustered index. Additionally, a
varchar field does not always take up all 8 bytes. Varchars are variable
length and only stored with the bytes taken up by the value within the field
.
So, your example is not actually accurate in how big a row of the index (and
it is further dependent on whether the index is clustered or non-clustered)
would be.
I'm not sure what exactly you are trying to do. But, in SQL 2000 you can
use DBCC SHOWCONTIG to see how many pages and extents an index takes up. It
will also show you if there is fragmentation in those pages and extents. In
SQL 2005 this command is replaced by the DMV dm_db_index_physical_stats.
dm_db_index_physical_stats also provides values for min,avg and max record
size. Which might be what you are looking for. I don't know if you can fin
d
that data for SQL 2000.
The article in BOL that David pointed you to is a good illustration of how
that calculation would be done manually for a particular table and its
indexes.
HTH
Ryan Powers
Clarity Consulting
http://www.claritycon.com
"David Gugick" wrote:
> yodarules wrote:
> See "Estimating the Size of a Table with a Clustered Index" in BOL.
> --
> David Gugick
> Quest Software
> www.quest.com
>|||I totally understand and am aware of the points mentioned. The main reason
why I want to do this to for size estimation. When we archive and delete
some records we store the number of rows deleted. When we reload the data
back, we want to do a space estimation and see if there's enough space to do
this operation. Its actaully much more complicated than what I have just
explained, but thats the idea. I don't want to do spaceused before and afte
r
for various other reasons and this seems to be the best case for us. If I
cannot dynamically find index sizes, then I have to hard code it some where,
again we are happy with an approximation, hence don't really care about
varchar, etc. Actually the tables I'm interested in doing this don't have
varchar.
"Ryan Powers" wrote:
[vbcol=seagreen]
> A couple of thoughts on your question.
> 1) SQL Server stores everything in 8KB pages and stores pages in 8 page
> extents. Based on the fill factor and size of your index SQL Server will
fit
> as many rows from your index within the page as it can.
> 2) A non-clustered index always contains a reference to the key of the
> clustered index or a pointer to a place in the heap if no clustered index
> exists, in addition to the data in the non-clustered index. Additionally,
a
> varchar field does not always take up all 8 bytes. Varchars are variable
> length and only stored with the bytes taken up by the value within the fie
ld.
> So, your example is not actually accurate in how big a row of the index (
and
> it is further dependent on whether the index is clustered or non-clustered
)
> would be.
> I'm not sure what exactly you are trying to do. But, in SQL 2000 you can
> use DBCC SHOWCONTIG to see how many pages and extents an index takes up.
It
> will also show you if there is fragmentation in those pages and extents.
In
> SQL 2005 this command is replaced by the DMV dm_db_index_physical_stats.
> dm_db_index_physical_stats also provides values for min,avg and max record
> size. Which might be what you are looking for. I don't know if you can f
ind
> that data for SQL 2000.
> The article in BOL that David pointed you to is a good illustration of how
> that calculation would be done manually for a particular table and its
> indexes.
> HTH
> --
> Ryan Powers
> Clarity Consulting
> http://www.claritycon.com
>
> "David Gugick" wrote:
>|||Okay.
So, it sounds like you are using sql 2000, and you are looking to
programmatically access the indexes, the columns that make up those indexes
and the size of the columns. So, you can estimate size. I could be jumping
to the wrong conclusions.
There are some undocumented SPs that could help you get there.
The following all seem to give you parts of what you want,
sp_MStablespace
sp_MSindexspace
sp_MShelpindex
sp_MShelpcolumns
sp_columns_rowset
For specifics about using any/ all of the above. Follow this link.
http://www.sql-server-performance.c...cumented_sp.asp
Note that since these are undocumented, they won't necessarily work in SQL
2005.
HTH
--
Ryan Powers
Clarity Consulting
http://www.claritycon.com
"yodarules" wrote:
[vbcol=seagreen]
> I totally understand and am aware of the points mentioned. The main reaso
n
> why I want to do this to for size estimation. When we archive and delete
> some records we store the number of rows deleted. When we reload the data
> back, we want to do a space estimation and see if there's enough space to
do
> this operation. Its actaully much more complicated than what I have just
> explained, but thats the idea. I don't want to do spaceused before and af
ter
> for various other reasons and this seems to be the best case for us. If I
> cannot dynamically find index sizes, then I have to hard code it some wher
e,
> again we are happy with an approximation, hence don't really care about
> varchar, etc. Actually the tables I'm interested in doing this don't have
> varchar.
> "Ryan Powers" wrote:
>
Find the size of an index row
Hey Guys,
I would like to enumerate all the indexes in a table and find the size of
each index. By size I mean not the total space occupied, but the size of each
row of the index. E.g.: table EMP has id bigint, name varchar(64),
address varchar(64), phone bigint, zip int, etc., with Index 1 (id, name),
Index 2 (zip, phone), and Index 3 (phone). Each row in this table could occupy
8+64+64+8+4 bytes, so each index entry for a row would occupy 8+64, 8+8, and 8
bytes. That's what I want. This would later be used to calculate the required
size based on the number of records to be inserted.
yodarules wrote:
> Hey Guys,
> I would like to enumerate all the indexes in a table and find the
> size of the index, when I mean the size not the total space occupied
> but the size of each row of the index. Eg: Table EMP has id bigint,
> name varchar(64), address varchar(64), phone bigint, zip int, etc.
> Index 1 (id, name), Index 2(zip, phone), Index 3(phone) Each row in
> this table could occupy 8+64+64+8+4 bytes. So each index associated
> with each row would occupy 8+64 + 8+8 + 8. That's what I want. This
> would be used in later calculating the required size based on number
> of records to be inserted.
See "Estimating the Size of a Table with a Clustered Index" in BOL.
--
David Gugick
Quest Software
www.quest.com|||Thanks David,
But that's not what I want, as I explained in my earlier post. This
tells you the method to calculate the space, which is what I might do when I
try to achieve my final goal. But the immediate goal is to dynamically find
the size of each non-clustered index on that table. The reason I want to do
it dynamically is that, if I add or remove indexes, my SP should still work
rather than rely on a hard-coded value.
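As an illustration of the per-row arithmetic the original post describes, here is a rough sketch (not from the thread; the names are made up). It uses each column's declared maximum size; note that zip is a 4-byte int, so the (zip, phone) index comes to 12 bytes rather than the 8+8 in the post:

```python
# Rough per-row index size, using the declared maximum size of each
# column (varchar counted at its full declared length, per the post's
# own simplification; zip int is 4 bytes, id/phone bigint are 8).
COLUMN_BYTES = {"id": 8, "name": 64, "address": 64, "phone": 8, "zip": 4}

def index_row_bytes(key_columns):
    """Sum the assumed byte sizes of the columns in one index key."""
    return sum(COLUMN_BYTES[c] for c in key_columns)

indexes = {"ix1": ["id", "name"], "ix2": ["zip", "phone"], "ix3": ["phone"]}
sizes = {name: index_row_bytes(cols) for name, cols in indexes.items()}
# sizes == {"ix1": 72, "ix2": 12, "ix3": 8}
```

As Ryan notes later in the thread, a real non-clustered index row also carries the clustered key (or a heap pointer), so these figures are a lower bound.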
"David Gugick" wrote:
> yodarules wrote:
> > Hey Guys,
> >
> > I would like to enumerate all the indexes in a table and find the
> > size of the index, when I mean the size not the total space occupied
> > but the size of each row of the index. Eg: Table EMP has id bigint,
> > name varchar(64), address varchar(64), phone bigint, zip int, etc.
> > Index 1 (id, name), Index 2(zip, phone), Index 3(phone) Each row in
> > this table could occupy 8+64+64+8+4 bytes. So each index associated
> > with each row would occupy 8+64 + 8+8 + 8. That's what I want. This
> > would be used in later calculating the required size based on number
> > of records to be inserted.
> See "Estimating the Size of a Table with a Clustered Index" in BOL.
> --
> David Gugick
> Quest Software
> www.quest.com
>|||A couple of thoughts on your question.
1) SQL Server stores everything in 8KB pages, and pages are grouped into
eight-page extents. Based on the fill factor and the size of your index row,
SQL Server fits as many index rows into each page as it can.
2) A non-clustered index always contains a reference to the key of the
clustered index, or a pointer to a place in the heap if no clustered index
exists, in addition to the data in the non-clustered index. Additionally, a
varchar field does not always take up its full declared length. Varchars are
variable length and are stored using only the bytes the value actually
occupies. So your example is not actually accurate about how big an index row
would be (and that further depends on whether the index is clustered or
non-clustered).
I'm not sure exactly what you are trying to do. But in SQL 2000 you can
use DBCC SHOWCONTIG to see how many pages and extents an index takes up. It
will also show you whether there is fragmentation in those pages and extents.
In SQL 2005 this command is superseded by the DMV
sys.dm_db_index_physical_stats, which also provides min, avg, and max record
size; that might be what you are looking for. I don't know if you can find
that data in SQL 2000.
The article in BOL that David pointed you to is a good illustration of how
that calculation would be done manually for a particular table and its
indexes.
HTH
--
Ryan Powers
Clarity Consulting
http://www.claritycon.com
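The two points above can be turned into a back-of-the-envelope page estimate. A hypothetical sketch follows; the 8096 usable bytes per page and the 9-byte per-row allowance are rough assumptions, not exact storage-engine figures:

```python
import math

PAGE_BYTES = 8096  # rough usable bytes in an 8KB data page (assumption)

def nc_index_row_bytes(key_bytes, clustered_key_bytes, row_overhead=9):
    # Per point 2: a non-clustered index row carries its own key plus the
    # clustered key (or a heap pointer); row_overhead is a rough allowance
    # for per-row bookkeeping.
    return key_bytes + clustered_key_bytes + row_overhead

def pages_needed(row_count, row_bytes, fill_factor=1.0):
    # Per point 1: rows are packed into 8KB pages up to the fill factor.
    rows_per_page = math.floor((PAGE_BYTES * fill_factor) / row_bytes)
    return math.ceil(row_count / rows_per_page)

# e.g. a 12-byte (zip, phone) key with an 8-byte clustered key on id:
row_bytes = nc_index_row_bytes(12, 8)        # 29 bytes per index row
pages = pages_needed(1_000_000, row_bytes)   # pages at 100% fill
```

For real numbers, DBCC SHOWCONTIG or sys.dm_db_index_physical_stats (as above) report what was actually allocated.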
"David Gugick" wrote:
> yodarules wrote:
> > Hey Guys,
> >
> > I would like to enumerate all the indexes in a table and find the
> > size of the index, when I mean the size not the total space occupied
> > but the size of each row of the index. Eg: Table EMP has id bigint,
> > name varchar(64), address varchar(64), phone bigint, zip int, etc.
> > Index 1 (id, name), Index 2(zip, phone), Index 3(phone) Each row in
> > this table could occupy 8+64+64+8+4 bytes. So each index associated
> > with each row would occupy 8+64 + 8+8 + 8. That's what I want. This
> > would be used in later calculating the required size based on number
> > of records to be inserted.
> See "Estimating the Size of a Table with a Clustered Index" in BOL.
> --
> David Gugick
> Quest Software
> www.quest.com
>|||I totally understand and am aware of the points mentioned. The main reason
why I want to do this is for size estimation. When we archive and delete
some records, we store the number of rows deleted. When we reload the data
back, we want to estimate the space required and see if there's enough to do
this operation. It's actually much more complicated than what I have just
explained, but that's the idea. I don't want to run sp_spaceused before and
after, for various other reasons, and this seems to be the best approach for
us. If I cannot dynamically find index sizes, then I have to hard-code them
somewhere. Again, we are happy with an approximation, hence we don't really
care about varchar, etc. Actually, the tables I'm interested in don't have
varchar columns.
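The reload check described here can be sketched as follows. This is a hypothetical illustration of the idea, with page overhead and fill factor deliberately ignored, since an approximation is the stated goal:

```python
def space_needed_bytes(rows, table_row_bytes, index_row_bytes_list):
    # Space to reload `rows` archived records: one base row plus one
    # index entry per index for each record (approximation only).
    per_row = table_row_bytes + sum(index_row_bytes_list)
    return rows * per_row

def can_reload(rows, table_row_bytes, index_row_bytes_list, free_bytes):
    # True if the estimated reload fits in the available free space.
    return space_needed_bytes(rows, table_row_bytes, index_row_bytes_list) <= free_bytes

# e.g. 50,000 archived rows, a 148-byte base row, indexes of 72/12/8 bytes:
needed = space_needed_bytes(50_000, 148, [72, 12, 8])  # 12,000,000 bytes
```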
"Ryan Powers" wrote:
> A couple of thoughts on your question.
> 1) SQL Server stores everything in 8KB pages and stores pages in 8 page
> extents. Based on the fill factor and size of your index SQL Server will fit
> as many rows from your index within the page as it can.
> 2) A non-clustered index always contains a reference to the key of the
> clustered index or a pointer to a place in the heap if no clustered index
> exists, in addition to the data in the non-clustered index. Additionally, a
> varchar field does not always take up all 8 bytes. Varchars are variable
> length and only stored with the bytes taken up by the value within the field.
> So, your example is not actually accurate in how big a row of the index (and
> it is further dependent on whether the index is clustered or non-clustered)
> would be.
> I'm not sure what exactly you are trying to do. But, in SQL 2000 you can
> use DBCC SHOWCONTIG to see how many pages and extents an index takes up. It
> will also show you if there is fragmentation in those pages and extents. In
> SQL 2005 this command is replaced by the DMV dm_db_index_physical_stats.
> dm_db_index_physical_stats also provides values for min,avg and max record
> size. Which might be what you are looking for. I don't know if you can find
> that data for SQL 2000.
> The article in BOL that David pointed you to is a good illustration of how
> that calculation would be done manually for a particular table and its
> indexes.
> HTH
> --
> Ryan Powers
> Clarity Consulting
> http://www.claritycon.com
>
> "David Gugick" wrote:
> > yodarules wrote:
> > > Hey Guys,
> > >
> > > I would like to enumerate all the indexes in a table and find the
> > > size of the index, when I mean the size not the total space occupied
> > > but the size of each row of the index. Eg: Table EMP has id bigint,
> > > name varchar(64), address varchar(64), phone bigint, zip int, etc.
> > > Index 1 (id, name), Index 2(zip, phone), Index 3(phone) Each row in
> > > this table could occupy 8+64+64+8+4 bytes. So each index associated
> > > with each row would occupy 8+64 + 8+8 + 8. That's what I want. This
> > > would be used in later calculating the required size based on number
> > > of records to be inserted.
> >
> > See "Estimating the Size of a Table with a Clustered Index" in BOL.
> >
> > --
> > David Gugick
> > Quest Software
> > www.quest.com
> >|||Okay.
So, it sounds like you are using SQL 2000, and you are looking to
programmatically access the indexes, the columns that make up those indexes,
and the sizes of those columns, so that you can estimate size. I could be
jumping to the wrong conclusions.
There are some undocumented SPs that could help you get there.
The following all seem to give you parts of what you want,
sp_MStablespace
sp_MSindexspace
sp_MShelpindex
sp_MShelpcolumns
sp_columns_rowset
For specifics on using any or all of the above, follow this link.
http://www.sql-server-performance.com/ac_sql_server_2000_undocumented_sp.asp
Note that since these are undocumented, they won't necessarily work in SQL
2005.
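Since these procedures each take a table name, calling them can be scripted. The sketch below is a hypothetical helper that only assembles the EXEC statements as strings; actually sending them to a server (e.g. via pyodbc) is left to the reader, and because the procedures are undocumented their parameters may differ or change:

```python
# The undocumented procedures listed above, each assumed to take a
# table name as its first argument.
PROCS = ["sp_MStablespace", "sp_MSindexspace", "sp_MShelpindex",
         "sp_MShelpcolumns", "sp_columns_rowset"]

def exec_statements(table):
    # Hypothetical helper: build one EXEC statement per procedure.
    return [f"EXEC {p} N'{table}'" for p in PROCS]

stmts = exec_statements("EMP")
# stmts[0] == "EXEC sp_MStablespace N'EMP'"
```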
HTH
--
Ryan Powers
Clarity Consulting
http://www.claritycon.com
"yodarules" wrote:
> I totally understand and am aware of the points mentioned. The main reason
> why I want to do this to for size estimation. When we archive and delete
> some records we store the number of rows deleted. When we reload the data
> back, we want to do a space estimation and see if there's enough space to do
> this operation. Its actaully much more complicated than what I have just
> explained, but thats the idea. I don't want to do spaceused before and after
> for various other reasons and this seems to be the best case for us. If I
> cannot dynamically find index sizes, then I have to hard code it some where,
> again we are happy with an approximation, hence don't really care about
> varchar, etc. Actually the tables I'm interested in doing this don't have
> varchar.
> "Ryan Powers" wrote:
> > A couple of thoughts on your question.
> >
> > 1) SQL Server stores everything in 8KB pages and stores pages in 8 page
> > extents. Based on the fill factor and size of your index SQL Server will fit
> > as many rows from your index within the page as it can.
> > 2) A non-clustered index always contains a reference to the key of the
> > clustered index or a pointer to a place in the heap if no clustered index
> > exists, in addition to the data in the non-clustered index. Additionally, a
> > varchar field does not always take up all 8 bytes. Varchars are variable
> > length and only stored with the bytes taken up by the value within the field.
> > So, your example is not actually accurate in how big a row of the index (and
> > it is further dependent on whether the index is clustered or non-clustered)
> > would be.
> >
> > I'm not sure what exactly you are trying to do. But, in SQL 2000 you can
> > use DBCC SHOWCONTIG to see how many pages and extents an index takes up. It
> > will also show you if there is fragmentation in those pages and extents. In
> > SQL 2005 this command is replaced by the DMV dm_db_index_physical_stats.
> > dm_db_index_physical_stats also provides values for min,avg and max record
> > size. Which might be what you are looking for. I don't know if you can find
> > that data for SQL 2000.
> >
> > The article in BOL that David pointed you to is a good illustration of how
> > that calculation would be done manually for a particular table and its
> > indexes.
> >
> > HTH
> >
> > --
> > Ryan Powers
> > Clarity Consulting
> > http://www.claritycon.com
> >
> >
> > "David Gugick" wrote:
> >
> > > yodarules wrote:
> > > > Hey Guys,
> > > >
> > > > I would like to enumerate all the indexes in a table and find the
> > > > size of the index, when I mean the size not the total space occupied
> > > > but the size of each row of the index. Eg: Table EMP has id bigint,
> > > > name varchar(64), address varchar(64), phone bigint, zip int, etc.
> > > > Index 1 (id, name), Index 2(zip, phone), Index 3(phone) Each row in
> > > > this table could occupy 8+64+64+8+4 bytes. So each index associated
> > > > with each row would occupy 8+64 + 8+8 + 8. That's what I want. This
> > > > would be used in later calculating the required size based on number
> > > > of records to be inserted.
> > >
> > > See "Estimating the Size of a Table with a Clustered Index" in BOL.
> > >
> > > --
> > > David Gugick
> > > Quest Software
> > > www.quest.com
> > >
find the most recent update row
Hi,
I have the following table.
Object_ID Rating Effective_Date
A 1 5/15/03
A 2 6/30/03
A 1 7/15/03
B 2 5/15/03
B 4 7/20/03
Another table is
Object_ID quantity query_date
A 1000 7/12/03
B 2000 7/12/03(must be the same for
all row in the column query_date)
and I wish to return the lastest Rating like
Object_ID quantity rating effective date
A 1000 2 6/30/03
B 2000 2 5/15/03
Could anyone have a one line T-sql to perform this and
advise?
I have tried to use a min function to compare the query
date and effective and choose the min non-negative number.
But it seems work fine for single row record, but if
multiple record, it seems difficult to do the grouping.
Thx..
(untested)
SELECT t2.Object_ID, t2.quantity, t1.Rating, t1.Effective_Date
FROM table1 t1
INNER JOIN table2 t2
ON t1.Object_ID = t2.Object_ID
WHERE t1.Effective_Date = (SELECT MAX(t3.Effective_Date) FROM table1 t3
WHERE t3.Effective_Date <= t2.query_date
AND t3.Object_ID = t1.Object_ID)
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
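On SQL 2005 the same idea can also be written with ROW_NUMBER(): rank each object's ratings newest-first among those on or before the query date, then keep rank 1. Untested, with the table names as used in the thread:

```sql
-- Untested SQL 2005 rewrite: latest rating per Object_ID as of query_date
SELECT Object_ID, quantity, Rating, Effective_Date
FROM (SELECT t2.Object_ID, t2.quantity, t1.Rating, t1.Effective_Date,
             ROW_NUMBER() OVER (PARTITION BY t1.Object_ID
                                ORDER BY t1.Effective_Date DESC) AS rn
      FROM table1 t1
      JOIN table2 t2 ON t2.Object_ID = t1.Object_ID
      WHERE t1.Effective_Date <= t2.query_date) AS x
WHERE rn = 1;
```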
"AW" <alexwong97@.hotmail.com> wrote in message
news:077b01c34497$dffc24e0$a001280a@.phx.gbl...
> Hi,
> I have the following table.
> Object_ID Rating Effective_Date
> A 1 5/15/03
> A 2 6/30/03
> A 1 7/15/03
> B 2 5/15/03
> B 4 7/20/03
> Another table is
> Object_ID quantity query_date
> A 1000 7/12/03
> B 2000 7/12/03(must be the same for
> all row in the column query_date)
> and I wish to return the lastest Rating like
> Object_ID quantity rating effective date
> A 1000 2 6/30/03
> B 2000 2 5/15/03
>
> Could anyone have a one line T-sql to perform this and
> advise?
> I have tried to use a min function to compare the query
> date and effective and choose the min non-negative number.
> But it seems work fine for single row record, but if
> multiple record, it seems difficult to do the grouping.
> Thx..
Find the greatest of three columns per each row and display
Hi,
i have a problem where in i have to display the greatest marks scored by each student.
How can i do this? Is there any built in TSQL function.
Rgds..,
Aazad
You either use a CASE statement or pivot the data to make a relational query possible through the aggregation functions.
SELECT CASE WHEN ColA >= ColB AND ColA >= ColC THEN ColA
WHEN ColB >= ColC THEN ColB
ELSE ColC
END
FROM SomeTable
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
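Another way that scales past three columns is to unpivot the marks and take MAX. An untested SQL 2005 sketch; the table and column names here are illustrative, not from the thread:

```sql
-- Untested sketch: greatest of three columns per row via CROSS APPLY + MAX
SELECT s.StudentID, x.BestMark
FROM StudentMarks AS s
CROSS APPLY (SELECT MAX(m) AS BestMark
             FROM (SELECT s.Mark1 AS m
                   UNION ALL SELECT s.Mark2
                   UNION ALL SELECT s.Mark3) AS marks) AS x;
```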
|||Hi,
It worked. Nice one. I got it the other way too!
Rgds..,
Aazad.
Monday, March 19, 2012
Find size of row in table
Is there any way to find the size of a particular row in a table?
Thanks,
Tom
TomT wrote:
> Is there any way to find the size of a particular row in a table?
> Thanks,
> Tom
For storage usage, you can use datalength(). For number of characters,
you can use LEN().
create table test1 (
id int identity,
col1 int,
col2 varchar(100),
col3 nvarchar(50),
col4 nchar(10),
col5 datetime)
insert into test1 values (5, 'Hi there', 'hi there unicode', '',
getdate())
Select * from test1
Select datalength(col1) +
datalength(col2) +
datalength(col3) +
datalength(col4) +
datalength(col5)
From test1
Where ID = 1
David Gugick
Imceda Software
www.imceda.com
|||David,
Thanks for your reply. I tried your suggestion, and after adjusting for
possible null values in some of the columns, got the results I needed.
The reason I'm doing this is due to a strange problem with a row in a table.
This table has a variety of columns, one of which is for notes, and it is
varchar(3000).
On the row in question, I am trying to insert notes which are 1900
characters (including spaces, line breaks, etc.).
The problem is, for some reason I cannot insert more than 1023 characters
into the column. I'm testing this by going directly into the row in the
table, and typing in the characters. When I try to go beyond 1023 characters,
all that happens is my system beeps, and no characters are accepted. The
entire size of the row, via your suggestion, is 1239.
Curiously, the row just before this has a total length of 2481, and the
number of characters in the notes column is 2258.
I'm stumped as to why I can't get more text into the column, I even removed
some characters from other columns in the row, just to see if this would make
a difference, but it did not.
Do you have any idea what might be happening here?
Thanks for your help,
Tom
"David Gugick" wrote:
> TomT wrote:
> > Is there any way to find the size of a particular row in a table?
> >
> > Thanks,
> >
> > Tom
> For storage usage, you can use datalength(). For number of characters,
> you can use LEN().
> create table test1 (
> id int identity,
> col1 int,
> col2 varchar(100),
> col3 nvarchar(50),
> col4 nchar(10),
> col5 datetime)
> insert into test1 values (5, 'Hi there', 'hi there unicode', '',
> getdate())
> Select * from test1
> Select datalength(col1) +
> datalength(col2) +
> datalength(col3) +
> datalength(col4) +
> datalength(col5)
> From test1
> Where ID = 1
>
> --
> David Gugick
> Imceda Software
> www.imceda.com
>|||TomT wrote:
> David,
> Thanks for your reply. I tried your suggestion, and after adjusting
> for possible null values in some of the columns, got the results I
> needed.
> The reason I'm doing this is due to a strange problem with a row in a
> table. This table has a variety of columns, one of which is for
> notes, and it is varchar(3000).
> On the row in question, I am trying to insert notes which are 1900
> characters (including spaces, line breakes, etc.).
> The problem is, for some reason I cannot insert more than 1023
> characters into the column. I'm testing this by going directly into
> the row in the table, and typing in the characters. When I try to go
> beyond 1023 characters, all that happens is my system beeps, and no
> characters are accepted. Then entire size of the row, via your
> suggestion, is 1239.
> Curiously, the row just before this has a total length of 2481, and
> the number of characters in the notes column is 2258.
> I'm stumped as to why I can't get more text into the column, I even
> removed some characters from other columns in the row, just to see if
> this would make a difference, but it did not.
> Do you have any idea what might be happening here?
> Thanks for your help,
> Tom
Unless this is a small table, you should probably get that varchar(3000)
out of there or change it to a text/ntext. If that column is even 1/3
full, you'll end up with very large row sizes and decrease your row
density (which slows everything down). 8060 is the limit on row length,
but it doesn't appear you are near that.
You say, "by going directly into the row in the table". That term means
nothing in the relational database world. There is no way to go directly
into a row and edit, short of opening a data file in an editor and
directly manipulating the file. You are using some application and that
application likely has column entry limits of 1K or 1024 characters. Try
entering the data using SQL or another application.
--
David Gugick
Imceda Software
www.imceda.com
Monday, March 12, 2012
Find Records with duplicate values in 1 row keyed to 2 other rows
My table looks like this:
rowid, tenantid, chargecode, date, amount, ..........
I need to get all rows that have multiple entries on the same date(mm/yyyy)
for a given combination of tenantID and chargecode.
for example:
1 tenant1 5 1/1999 $500
2 tenant1 6 1/1999 $25
3 tenant1 5 1/1999 $35
4 tenant2 5 1/1999 $30
5 tenant2 5 2/1999 $30
6 tenant2 6 2/1999 $40
7 tenant2 6 2/1999 $50
I would need to get that tenant1 has 2 records with a chargecode of 5 for
1/1999
and that tenant2 has 2 records with chargecode 6 for 2/1999
SELECT tenantID, chargecode, date, count(*) as Duplicates
FROM TheTable
GROUP BY tenantID, chargecode, date
HAVING count(*) > 1
Roy Harvey
Beacon Falls, CT
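Since the poster asked for the rows themselves rather than just the counts, the duplicate groups can be joined back to the table. Untested, using the same placeholder table name as above:

```sql
-- Untested follow-up: return every row belonging to a duplicate group
SELECT t.*
FROM TheTable AS t
JOIN (SELECT tenantID, chargecode, date
      FROM TheTable
      GROUP BY tenantID, chargecode, date
      HAVING COUNT(*) > 1) AS d
  ON d.tenantID = t.tenantID
 AND d.chargecode = t.chargecode
 AND d.date = t.date;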
On Thu, 22 Feb 2007 16:19:00 -0800, Joe S. <joecrew@.news.postalias>
wrote:
>My table looks like this:
>rowid, tenantid, chargecode, date, amount, ..........
>
>I need to get all rows that have multiple entries on the same date(mm/yyyy)
>for a given combination of tenantID and chargecode.
>for example:
>1 tenant1 5 1/1999 $500
>2 tenant1 6 1/1999 $25
>3 tenant1 5 1/1999 $35
>4 tenant2 5 1/1999 $30
>5 tenant2 5 2/1999 $30
>6 tenant2 6 2/1999 $40
>7 tenant2 6 2/1999 $50
>I would need to get that tenant1 has 2 records with a chargecode of 5 for
>1/1999
>and that tenant2 has 2 records with chargecode 6 for 2/1999
Find out if a row is subtotal or not?
How do I check if a row is a subtotal or not? I have a report where one
column shows Average hour rate. Right now, the subtotal line shows the
Average column as a subtotal of all the other averages... So I'd like to
have a statement that does something like
if row is a subtotal row then
show nothing (or something else)
else
show the Average value
end if
I bet it's trivial, but I can't figure it out. Please help. :)
Kaisa M. Lindahl
Did you look at the InScope function? It will allow you to distinguish
between cells in subtotals and cells in the groupings. More information on
InScope is available at:
http://msdn.microsoft.com/library/en-us/RSCREATE/htm/rcr_creating_expressions_v1_0jmt.asp
E.g. =iif(InScope("ColumnGroup"), Avg(Fields!F1.Value), Nothing)
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Kaisa M. Lindahl" <kaisaml@.hotmail.com> wrote in message
news:%23glpsPj1EHA.3468@.TK2MSFTNGP14.phx.gbl...
> How do I check if a row is a subtotal or not? I have a report where one
> column shows Average hour rate. Right now, the subtotal line shows the
> Average column as a subtotal of all the other averages... So I'd like to
> have a statement that does something like
> if row is a subtotal row then
> show nothing (or something else)
> else
> show the Average value
> end if
> I bet it's trivial, but I can't figure it out. Please help. :)
> Kaisa M. Lindahl
>
Sunday, February 26, 2012
find changed columns in a new version of a row
Hi,
I wanted to know if there's a fast way of knowing which columns in a row have
changed.
I have two versions of rows: old and new.
My table is as follows
CPK compositePrimaryKey--combination of 3 keys
versionID uniqueidentifier,
param1 int,
param2 char(10),
param3 bit
--etc
my new and old records are as follows
old version
---
composite primary key guid param1 param2 param3
CPK1 GUID1 4 'enabled' 0
CPK2 GUID2 5 'disabled' 0
CPK3 GUID3 6 'enabled' 1
new version
---
composite primary key guid param1 param2 param3
CPK1 GUID1 10 'enabled' 1
/*changed record*/
CPK2 GUID2 5 'disabled' 0
CPK3 GUID3 6 'disabled' 0
/*changed record*/
I am using binary_checksum(*) to find which ROWS are new, i.e. they have
changed, but I need to know which COLUMNS in each of those rows have
changed. There are about 100 columns in each row, out of which only 3-4 will
change.
I am using SQL Server 2000.
Is there a way to extract just column names from each row that has changed?
Thanks.
tech77 (
tech77@.discussions.microsoft.com) writes:
> I am using binary_checksum(*) to find which ROWS are new ie they have
> changed but I need to know which COLUMNS in each of those rows have
> changed.There are about 100 columns in each row out of which only 3-4 will
> change .
> I am using SQL Server 2000.
> Is there a way to extract just column names from each row that has
> changed?
No, you need to compare column by column.
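Such a column-by-column comparison could be sketched as follows, using the old/new versions and the param columns from the original post (an illustration, not the poster's actual code; #old and #new are assumed temp tables keyed on the composite key):

```sql
-- For each changed row, build a list of the columns that differ.
-- Note: a plain <> misses NULL-to-value changes; add IS NULL checks
-- for nullable columns.
SELECT n.CPK,
       CASE WHEN o.param1 <> n.param1 THEN 'param1 ' ELSE '' END +
       CASE WHEN o.param2 <> n.param2 THEN 'param2 ' ELSE '' END +
       CASE WHEN o.param3 <> n.param3 THEN 'param3 ' ELSE '' END
          AS changed_columns
FROM #old o
JOIN #new n ON n.CPK = o.CPK
WHERE o.param1 <> n.param1
   OR o.param2 <> n.param2
   OR o.param3 <> n.param3
```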
And using binary_checksum is risky. Two versions of the same row could
have different checksum.
Are you doing this in a trigger? Is the purpose for auditing? Which
version of SQL Server are you using?
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Thanks for the reply. I'm using SQL 2000 and I'm not doing this in a trigger.
I am doing it in a procedure and all these calculations are done in a temp
table for the purpose of auditing changes to a column.
"Erland Sommarskog" wrote:
>
tech77 (
tech77@.discussions.microsoft.com) writes:
> No, you need compare column by column.
> And using binary_checksum is risky. Two versions of the same row could
> have different checksum.
> Are you doing this in a trigger? Is the purpose for auditing? Which
> version of SQL Server are you using?
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodin...ions/books.mspx
>|||
Erland Sommarskog wrote:
>
tech77 (
tech77@.discussions.microsoft.com) writes:
>
>No, you need compare column by column.
>And using binary_checksum is risky. Two versions of the same row could
>have different checksum.
>
Erland,
I think you meant to say that two *different* versions of a row could
have the *same* checksum. You're correct to say that checksums are not
completely reliable indicators of change.
Steve Kass
Drew University
>Are you doing this in a trigger? Is the purpose for auditing? Which
>version of SQL Server are you using?
>
>|||
tech77 (
tech77@.discussions.microsoft.com) writes:
> thanks for the reply.i m using sql 2000 and i m not doing this in a
> trigger.I am doing it in a procedure and all these calculations are done
> in a temp table for the purpose of auditing changes to a column.
OK, you still need to run column by column. And avoid binary_checksum
or checksum.
Had you been on SQL 2005, I could have suggested alternative solutions.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Hi, thanks Erland and Steve for all your replies.
We would be migrating to SQL 2005 soon. Can you please tell me how to do it in
SQL 2005?
Thanks.
"Steve Kass" wrote:
>
> Erland Sommarskog wrote:
>
> Erland,
> I think you meant to say that two *different* versions of a row could
> have the *same* checksum. You're correct to say that checksums are not
> completely reliable indicators of change.
> Steve Kass
> Drew University
>
>|||
tech77 (
tech77@.discussions.microsoft.com) writes:
> We would be migrating to SQL 2005 soon.Can you please tellme how to do
> it in SQL 2005?
One thing I was thinking of was hash_bytes(), which should be more reliable
than binary_checksum. But I forgot that it does not operate on the entire
row like binary_checksum(*) does, so it's probably not that useful.
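(The SQL 2005 function is spelled HASHBYTES(). It hashes a single value, so applying it per row means concatenating the columns explicitly, roughly as in this sketch; the casts and the separator are assumptions, and in SQL 2005 the input is limited to 8000 bytes:)

```sql
-- Per-row hash over selected columns (sketch, SQL 2005).
-- The '|' separator prevents 'ab'+'c' colliding with 'a'+'bc'.
SELECT CPK,
       HASHBYTES('SHA1',
                 CONVERT(varchar(12), param1) + '|' +
                 param2 + '|' +
                 CONVERT(char(1), param3)) AS rowhash
FROM #new
```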
However, what is an interesting option for auditing on SQL 2005, is the
XML datatype. The idea would be to have a single table where you would
have tablename, keyvalue1 and keyvalue2, and an after-image of the data
in an XML column. Then you would have regular auditing columns such as
when, whom, application and host name.
The trigger code can be standardised, and you could write a simple
application that reads the log rows for a combination of tablename and keyvalue,
and presents for each entry what has changed. What's really nice
here is that the presenting application would not need to have much
knowledge about the data it presents.
This is an idea that we discussed in our shop when a prospective customer
asked for better auditing than we have today. The customer dropped out
for other reasons, but we might implement the idea when we come to
SQL 2005.
Of course, this method would not be very space-effective, but we prefer
this since developing and maintaining space-effective auditing is
tedious and expensive.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Hi Erland,
I am using SQL Server Express Edition now. Can you please suggest your
alternative solution for finding changed row and columns?
Thanks.
"Erland Sommarskog" wrote:
>
tech77 (
tech77@.discussions.microsoft.com) writes:
> OK, you still need to run column by column. And avoid binary_checksum
> or checksum.
> Had you been on SQL 2005, I could have suggested alternative solutions.
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodin...ions/books.mspx
>|||
tech77 (
tech77@.discussions.microsoft.com) writes:
> I am using SQL Server express edition now.Can you please suggest your
> alternative solution for finding changed row and columns?
I think that what I had in mind was to store an xml document of the
row in a table. This is possible, because in SQL 2005, you can use
FOR XML and receive the result in a variable or column of the xml data
type.
The audit table would look something like this:
CREATE TABLE audits (
ident int IDENTITY,
tablename sysname NOT NULL,
keyvalue1 sql_variant NOT NULL,
keyvalue2 sql_variant NULL,
moduser1 sysname NOT NULL
CONSTRAINT default_audit_moduser1 DEFAULT original_login(),
moduser2 sysname NOT NULL
CONSTRAINT default_audit_moduser2 DEFAULT SYSTEM_USER,
moddate datetime NOT NULL
CONSTRAINT default_audit_moddate DEFAULT getdate(),
hostname sysname NULL
CONSTRAINT default_audit_hostname DEFAULT host_name(),
afterimage xml NULL,
CONSTRAINT pk_audit PRIMARY KEY (ident))
Thus, one table would hold the audit for many tables. The idea is that
you could write a general application that reads all rows for a certain
table and keyvalue. The application would compare the XML documents, and
then present just the difference. The beauty of it is that the application
itself would not need to have any knowledge of the data model as such.
Some notes on details:
o The table only handles two-column keys. The assumption is that if you
have more columns in the key, they could still pass as the same record.
But add more keyvalue columns as needed.
o SYSTEM_USER and original_login() return the same value in most cases,
but if EXECUTE AS is in force, they can yield different results.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
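A trigger populating such an audits table with a FOR XML after-image could look roughly like this (a sketch of the idea described in the post; mytable and its key column id are hypothetical names):

```sql
-- Sketch: audit trigger storing the new row as an xml after-image (SQL 2005).
CREATE TRIGGER tr_mytable_audit ON mytable
AFTER INSERT, UPDATE
AS
INSERT INTO audits (tablename, keyvalue1, afterimage)
SELECT 'mytable',
       i.id,
       (SELECT * FROM inserted i2 WHERE i2.id = i.id
        FOR XML RAW, TYPE)   -- after-image as an xml value
FROM inserted i
```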