
Thursday, March 29, 2012

Finding if a REAL is LIKE %[49]9

Timings... sometimes there are almost too many ways to do the same thing.

The only significant findings I see from the timings below are:

1) Integer math is generally fastest, naturally. Bigint math isn't much
slower, for integers that all fit within an integer.

2) Converting float to varchar is relatively slow, and should be avoided if
possible. Converting from integer to varchar, or varchar to int, is several
times faster.

3) Most significantly, and less obviously, CASE expr WHEN ... recomputes
expr for each WHEN condition, unfortunately, so it is the same speed as (or
perhaps slightly slower than) listing WHEN expr = value for each condition.
An indexed computed column (somehow materialized) might be advisable, when
possible, to avoid the repeated computation in CASE..WHEN expressions (if
that helps).

Note that if you divide by COUNT(*), most timings below are below one
microsecond per row, so this all may not be very significant in most
applications, unless you do frequent aggregations of some sort.
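Stripped of the SQL, the arithmetic behind the integer approaches below is just a modulus test: a non-negative integer price ends in 49 or 99 exactly when it equals 49 mod 50. A quick sketch in Python (the function name is mine, purely illustrative):

```python
# Minimal sketch (names are mine): the decimal form of a non-negative
# integer ends in "49" or "99" exactly when value % 50 == 49, since
# that is the same as value % 100 being 49 or 99.
def ends_in_49_or_99(srp: int) -> bool:
    return srp % 50 == 49

# Cross-check the modulus test against the string test.
for srp in range(100000):
    by_string = str(srp).endswith("49") or str(srp).endswith("99")
    assert ends_in_49_or_99(srp) == by_string
print("modulus test agrees with the string test")
```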

COUNT(*) FROM [my_sf_table] = 477446 rows

The result from each query = either 47527 or 47527.0

Platform: Athlon 2000 XP w/512MB RAM, table seems to be cached in RAM, SQL
2000, all queries run at least 3 times and minimum timings shown (msec).

SRP is a REAL (4 bytes)

Fastest ones are near the end.

CPU SQL
(ms)
-- Convert to varchar (implicitly) and compare right two digits
-- (original version -- no I didn't write it)
4546 select sum(case right(srp,2)
when '99' then 1 when '49' then 1 else 0 end)
from sf

-- Use LIKE for a single comparison instead of two, much faster
-- Note that the big speedup indicates that
-- CASE expr WHEN y then a WHEN z then b .
-- recalculates expr for each WHEN clause
2023 select sum(case when srp like '%[49]9' then 1 else 0 end)
from sf

-- Floating point method of taking a modulus (lacking fmod/modf)
2291 select sum(case round(srp - 100e*floor(srp*.01e),0)
when 99 then 1 when 49 then 1 else 0 end)
from sf

-- Round to nearest 50 and compare with 49
1322 select sum(case round(srp-50e*floor(srp*.02e),0)
when 49 then 1 else 0 end)
from sf

-- Divide by 49 by multiplying by (slightly larger than) 1e/49e
811 select sum(floor((cast(srp as integer)%50)*2.04082E-2))
from sf

-- Integer approach without using CASE
731 select sum(coalesce(nullif(sign(cast(srp as
integer)%50-48),-1),0))
from sf

-- My original integer approach
651 select sum(case cast(srp as integer)%100
when 99 then 1 when 49 then 1 else 0 end)
from sf

-- Modulus 50 integer approach without CASE
481 select sum((cast(srp as integer)%50)/49)
from sf

-- Modulus 50 integer approach
460 select sum(case cast(srp as integer)%50
when 49 then 1 else 0 end)
from sf

-- bigint without CASE
531 select sum((cast(srp as bigint)%50)/49)
from sf

-- bigint with CASE
521 select sum(case cast(srp as bigint)%50
when 49 then 1 else 0 end)
from sf

-- get SIGN to return -1 or 0, then add 1
-- much better than the coalesce+nullif approach
500 select sum(sign(cast(srp as integer)%50-49)+1)
from sf

-- SIGN with BIGINT
551 select sum(sign(cast(srp as bigint)%50-49)+1)
from sf

BTW, I know srp should be int to begin with for this to be faster... Okay,
so...

select cast(srp as int) srp into sf from [my_real_sf_table]

720 select sum(case when srp like '%[49]9' then 1 else 0 end) from sf

339 select sum(1+sign(srp%50-49)) from sf
310 select sum(srp%50/49) from sf
300 select sum(case srp%50 when 49 then 1 else 0 end) from sf

What if it were a char(7)?

select cast(cast(srp as integer) as char(7)) srp into sf2 from
[my_sf_table]

801 select sum(case right(rtrim(srp),2) when '49' then 1
when '99' then 1 else 0 end) from sf2
717 select sum(case when srp like '%[49]9' then 1 else 0 end) from sf2
405 select sum(srp%50/49) from sf2
391 select sum(case srp%50 when 49 then 1 else 0 end) from sf2

How about varchar(7)?

drop table sf2
select cast(cast(srp as integer) as varchar(7)) srp into sf2 from
[my_sf_table]

581 select sum(case right(srp,2) when '49' then 1
when '99' then 1 else 0 end) from sf2
569 select sum(case when srp like '%[49]9' then 1 else 0 end) from sf2

LIKE is faster on VARCHAR than on CHAR columns...
Apparently it has to effectively RTRIM the trailing spaces during the LIKE
operation.

Is binary collation any faster?

drop table sf2
select cast(cast(srp as integer) as varchar(7))
COLLATE Latin1_General_BIN srp
into sf2 from tbl_superfile

561 select sum(case right(srp,2) when '49' then 1
when '99' then 1 else 0 end) from sf2
530 select sum(case when srp like '%[49]9' then 1 else 0 end) from sf2

Binary collation comparisons are slightly faster, though it's not a big
difference (with just two characters being compared).

662 select sum(case convert(binary(2),right(srp,2))
when 0x3439 then 1 when 0x3939 then 1 else 0 end) from sf2

----

5037 select right(srp,2) srp,count(*) from my_sf_table
group by right(srp,2)
order by right(srp,2)

920 select cast(srp as int)%100 srp,count(*) from my_sf_table
group by cast(srp as int)%100
order by cast(srp as int)%100

--

On the one hand, premature optimization can be a waste of time and energy.
On the other hand, understanding performance implications of various
operations can help write more efficient systems.

In any case, an indexed computed column or one updated on a trigger could
virtually eliminate the need for any of these calculations to be performed,
except upon insertion or update, so maybe my comparisons aren't very
meaningful for most applications, considering we're talking about less than
3 microseconds per row here worst-case.

But the results remind me, some recommend avoiding Identity when it's not
necessary. I find Identity(int,1,1) to be a nice, compact surrogate key that
is useful for quick comparisons, grouping, etc, and so on. Also, it seems
most appropriate as the primary key to all lookup tables in a star schema in
OLAP data warehousing. (?) Of course, in some situations, it's not
appropriate, particularly when having a surrogate key violates data
integrity by allowing duplicates that would not be allowed with a proper
primary key constraint, or when the surrogate key is completely redundant
with (especially a short) single-column unique key value that would be a
better selection as the primary key. With multi-column primary keys, I think
it's sometimes convenient to have a surrogate Identity if only for IN
clauses that reference that identity column (though EXISTS can usually
replace those, so maybe that's a weak excuse for an extra column.)

Aaron W. West (tallpeak@hotmail.NO.SPAM) writes:
> CPU SQL
> (ms)
> -- Convert to varchar (implicitly) and compare right two digits
> -- (original version -- no I didn't write it)
> 4546 select sum(case right(srp,2)
> when '99' then 1 when '49' then 1 else 0 end)
> from sf
> -- Use LIKE for a single comparison instead of two, much faster
> -- Note that the big speedup indicates that
> -- CASE expr WHEN y then a WHEN z then b .
> -- recalculates expr for each WHEN clause
> 2023 select sum(case when srp like '%[49]9' then 1 else 0 end)
> from sf

I tried some variations of this, and indeed it seems that there is a cost
when the expression appears with several WHEN clauses. I tried a variation
of this, where I supplemented the test table with a char(2) column, so I
could factor out that the WHEN clauses themselves were not the culprits.

CREATE TABLE realthing (realta real NOT NULL,
lasttwo char(2) NOT NULL)
go
INSERT realthing (realta, lasttwo)
SELECT r, right(r, 2)
FROM (SELECT r = convert(real, checksum(newid()))
FROM Northwind..Orders a
CROSS JOIN Northwind..Orders b) AS f
go
DBCC DROPCLEANBUFFERS
go
DECLARE @start datetime
SELECT @start = getdate()
SELECT SUM(CASE right(realta, 2)
WHEN '99' THEN 1
WHEN '49' THEN 1
WHEN '39' THEN 1
ELSE 0 END)
FROM realthing
SELECT datediff(ms, @start, getdate()) -- 20766 ms.
go
DBCC DROPCLEANBUFFERS
go
DECLARE @start datetime
SELECT @start = getdate()
SELECT SUM(CASE WHEN right(realta, 2) LIKE '[349]9' THEN 1 ELSE 0 END)
FROM realthing
SELECT datediff(ms, @start, getdate()) -- 8406 ms.
go
DBCC DROPCLEANBUFFERS
go
DECLARE @start datetime
SELECT @start = getdate()
SELECT SUM(CASE lasttwo
WHEN '99' THEN 1
WHEN '49' THEN 1
WHEN '39' THEN 1
ELSE 0 END)
FROM realthing
SELECT datediff(ms, @start, getdate()) -- 920 ms.
go
DBCC DROPCLEANBUFFERS
go
DECLARE @start datetime
SELECT @start = getdate()
SELECT SUM(CASE WHEN lasttwo LIKE '[349]9' THEN 1 ELSE 0 END)
FROM realthing
SELECT datediff(ms, @start, getdate()) -- 1466 ms.

Thus, when using the char(2) column, LIKE is slower even though there
is only one WHEN condition. So indeed it seems that right(realta, 2)
is computed three times in the first test.

Another funny thing is the actual results from the queries - they are
different. When I ran:

select count(*) from realthing where lasttwo <> right(realta, 2)

The result was about half the size of realthing! I can't see that
this difference affects the results, though.

Now, your article had a lot more tests, but I have to confess that
you lost me quite early, because you never discussed what the
actual problem is. Since you are working with floating-point numbers,
there is a great risk that the different methods not only have different
execution times, but also give different results.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Thanks for reminding me... I should always start a post with a description
of the problem.
of the problem.

An existing trigger updates the column tbl_sales.aors to A if a price (SRP,
in pennies) ends in other than 49 or 99 (cents), and S otherwise. In this
case, the problem was largely in my head. That is, I didn't "like" the
existing UPDATE statement, and wanted to see if I could make it more
efficient. My first solution was near to the best, convert real to integer
and take the modulus...

Here, I'll start over, but with a significantly different problem, and one
that is a real problem, that of denormalization vs normalization:

Currently, I have a sort of data warehouse table of sales data. It stores
the date and time of each sale, client_id, UPC, and other information, but
no quantity, and no price. The quantity is assumed to be 1, and multiple
sales are stored as multiple records.

During import of a sales data file, many fields are populated from a sort of
product description table, in an insert trigger. The original author
apparently thought denormalization of the product data into the sales table
would be better for query performance. I don't think so, but am not really
sure.

There's a potential backfill issue with the current denormalized data
structure. After product information changes, or new product information
comes in, I have to backfill all the sales records missing the changed data.
I think it goes without saying that it's bad to have to backfill data just
because of denormalization.

1) Should I completely remove all redundant information from this table?

2) If I remove some fields and not others, which ones should I remove? (Or
how do I decide which ones not to remove? The ones least likely to change
are the most likely to appear in GROUP BY clauses?)

(I'm thinking I should rename the table, remove all I can, create a view to
expose the lookup fields, and then work on any resulting performance issues
I may have...)

I've mostly decided to remove the redundant descriptive fields (artist_name,
title, label_name, etc), and replace them with title_id, but am unsure
whether to eliminate fields such as genre, market_id, region_id, and others
used in GROUP BY queries.

(I should probably be using MS OLAP to aggregate the data, but haven't yet
put all the data in the proper form for this... adding title_id should help,
I think... now need a table of dates, a table of weeks, a table of chains, a
table of coalitions, and other dimension tables.. which reminds me,
client_ID can be chain_ID or coalition_ID, which is a violation of 1NF, so
I've read... I could create views that expose the chains and coalitions, and
wonder if that would be a reasonable solution or if I should break the table
into four tables for the different types of clients...)

Joining two or three tables doesn't seem like it should be a big expense,
when the lookup tables are no more than a few hundred thousand rows for the
largest one (the information on products), for example... if I have the
proper indexes. But I think GROUP BY would be a little slower if I added any
columns from the other tables to my GROUP BY, so maybe a little
denormalization of data that rarely changes (a client is unlikely to change
its chain_ID) isn't a bad idea. But any denormalization should include
constraints so that if the data changes in the base table, it is
automatically changed in the sales table (Foreign key... add the constraint
REFERENCES.. ON UPDATE CASCADE to the chain_id of the referenced table, and
make sure the column is indexed?)

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message
news:Xns951BD21B14996Yazorman@127.0.0.1...
(full quote of Erland's reply above omitted)

|||

Aaron W. West (tallpeak@hotmail.NO.SPAM) writes:
> An existing trigger updates the column tbl_sales.aors to A if a price
> (SRP, in pennies) ends in other than 49 or 99 (cents), and S otherwise.
> In this case, the problem was largely in my head. That is, I didn't
> "like" the existing UPDATE statement, and wanted to see if I could make
> it more efficient. My first solution was near to the best, convert real
> to integer and take the modulus...

If you have the price as a real, I would be more worried that a price
which is exactly 76.99 is internally represented as 76.989998 and then
gets handled as 76.98.
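This worry is easy to reproduce outside SQL Server. Python's struct module can round-trip a value through the 4-byte IEEE single-precision format that REAL uses (a minimal sketch; `as_real` is a helper name of mine):

```python
import struct

def as_real(x: float) -> float:
    """Round-trip x through IEEE 754 single precision, the 4-byte
    format that SQL Server's REAL uses (helper name is mine)."""
    return struct.unpack('f', struct.pack('f', x))[0]

price = as_real(76.99)
print(price)            # close to, but not exactly, 76.99
assert price != 76.99   # 76.99 has no exact 4-byte representation
```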

> During import of a sales data file, many fields are populated from a
> sort of product description table, in an insert trigger. The original
> author apparently thought denormalization of the product data into the
> sales table would be better for query performance. I don't think so, but
> am not really sure.
> There's a potential backfill issue with the current denormalized data
> structure. After product information changes, or new product information
> comes in, I have to backfill all the sales records missing the changed
> data. I think it goes without saying that it's bad to have to backfill
> data just because of denormalization.

I would not say that I like this arrangement. Then again, I work only
with OLTP databases, and I understand that in data warehouses
denormalization is customary.

It is difficult to say what would be best in your case. On the one
hand, the denormalization appears to cause some problems when you update.
On the other, the denormalization may pay off when you actually query
the information. Actual benchmarking is probably the only way to
find out. The catch may be that the size of the data is such that it
does not really lend itself to massive experimenting.

The group microsoft.public.sqlserver.datawarehouse is likely to
be frequented by people with more experience of dilemmas like this one.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

SRP is declared as a REAL, but contains integer data (pennies). I just
haven't bothered to drop the indexes referring to SRP and alter the column
to integer yet... but in any case, integer data is always represented
exactly in a REAL, up to the limit of exact integer representation in a
REAL... which is 2^24, because the mantissa of a REAL is 24 bits.

----

if you care...

Three ways of proving that fact...

-- This finds the first power of 2, plus 1, that is not representable
-- accurately as a REAL
declare @i int, @r real
set @i = 2
while @i < 2001002003
begin
    if (cast(@i as float) <> cast(@i as real))
    begin
        print @i
        break
    end
    set @i = ((@i & -2) * 2) | 1
end

--outputs 16777217

-- Same without a loop:
SELECT MIN(Number)
FROM Numbers
WHERE Number<31 AND CAST(POWER(2,Number)+1 AS REAL)
<> CAST(POWER(2,Number)+1 AS FLOAT)

----
24

(1 row(s) affected)

Or, given a table of numbers 1 to 1000 at least:

SELECT MIN(M*1000000+T*1000+U)-1 MaxExactRealInteger
FROM
(SELECT TOP 33 Number-1 M FROM Numbers) AS M,
(SELECT TOP 1000 Number-1 T FROM Numbers) AS T,
(SELECT TOP 1000 Number-1 U FROM Numbers) AS U
WHERE CAST(M*1000000+T*1000+U AS FLOAT) <>
CAST(M*1000000+T*1000+U AS REAL)

MaxExactRealInteger
------
16777216

(1 row(s) affected)

Which again, is POWER(2,24)

Takes around a half minute to run...
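For what it's worth, the same 2^24 boundary can be checked outside SQL Server in a couple of lines. This Python sketch round-trips integers through single precision (`as_real` is an illustrative helper of mine, mirroring the T-SQL loop above):

```python
import struct

def as_real(x: float) -> float:
    """Round-trip through IEEE 754 single precision (SQL Server REAL)."""
    return struct.unpack('f', struct.pack('f', float(x)))[0]

# 2^24 survives the round trip exactly; 2^24 + 1 does not.
assert as_real(2**24) == 2**24
assert as_real(2**24 + 1) != 2**24 + 1

# Mirror the T-SQL loop: walk powers of two plus one until one breaks.
i = 2
while as_real(i) == i:
    i = ((i & -2) * 2) | 1
print(i)  # 16777217, matching the T-SQL output above
```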

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message
news:Xns951C95812A564Yazorman@127.0.0.1...
(full quote of Erland's reply above omitted)

Friday, March 23, 2012

Finding a creator of object

Hi all,

whenever dbo is prefixed in the create script, the owner becomes dbo.
If the script below is run by sam, the owner still becomes dbo:

create proc dbo.test
as
print 'hello'

Is there any place where SQL Server keeps a record of the creator?

shiju (shiju.samuel@gmail.com) writes:

Quote:

Originally Posted by

whenever dbo is prifixed with the create script the owner becomes dbo.
If the below script
run by the sam the owner becomes dbo
>
create proc dbo.test
as
print 'hello'
>
Is there any place where SQL server keeps the record of creator?


No. You would have to have a trace running that captures the Object:Created
event. In SQL 2005 you could also set up a DDL trigger.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Wednesday, March 21, 2012

Find the first day the account passes a threshold

Using daily transactions, I need to find the first day an account's deposits
total more than 20 dollars (positive or negative).
Please see the DDL below, a SQL statement, and the desired output. Thanks.
create table #bankaccounts
(
accountid int not null
, transdate datetime not null
, primary key (accountid, transdate)
, amttran decimal(19,2) not null
)
set nocount on
insert #bankaccounts
values (1, '20060101', 10)
insert #bankaccounts
values (1, '20060102', 5)
insert #bankaccounts
values (1, '20060103', 12)
insert #bankaccounts
values (1, '20060106', 15.50)
insert #bankaccounts
values (1, '20060107', 13)
insert #bankaccounts
values (2, '20060101', 15)
insert #bankaccounts
values (2, '20060102', -6)
insert #bankaccounts
values (2, '20060103', 2)
insert #bankaccounts
values (2, '20060106', 2.75)
insert #bankaccounts
values (2, '20060107', 4)
insert #bankaccounts
values (2, '20060111', 5)
insert #bankaccounts
values (2, '20060116', 9)
insert #bankaccounts
values (3, '20060115', 7)
insert #bankaccounts
values (3, '20060116', 8)
insert #bankaccounts
values (3, '20060122', 9)
insert #bankaccounts
values (3, '20060126', 10)
insert #bankaccounts
values (4, '20060108', 9)
insert #bankaccounts
values (4, '20060112', 10)
insert #bankaccounts
values (5, '20060107', -16)
insert #bankaccounts
values (5, '20060108', 3)
insert #bankaccounts
values (5, '20060109', -12)
insert #bankaccounts
values (5, '20060111', -16)
select accountid
, sum(amttran) [sum]
, min(transdate) [mindate]
, max(transdate) [maxdate]
, ' date over the 20 dollar threshold ' [thresholddate]
from #bankaccounts
group by accountid
having abs(sum(amttran)) > 20
1 55.50 2006-01-01 2006-01-07 2006-01-03
2 31.75 2006-01-01 2006-01-16 2006-01-11
3 34.00 2006-01-15 2006-01-26 2006-01-22
5 -41.00 2006-01-07 2006-01-11 2006-01-09

|||

Thom,
this works, but can be improved:
select accountid
, [sum]
, [mindate]
, [maxdate]
, (select min(transdate)
   from #bankaccounts b
   where b.accountid = t.accountid
     and ((select sum(amttran) from #bankaccounts b1
           where b1.transdate <= b.transdate
             and b1.accountid = b.accountid) > 20
       or (select sum(amttran) from #bankaccounts b1
           where b1.transdate <= b.transdate
             and b1.accountid = b.accountid) < -20)
  ) as thresholddate
from (
    select accountid
    , sum(amttran) [sum]
    , min(transdate) [mindate]
    , max(transdate) [maxdate]
    from #bankaccounts
    group by accountid
) t
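The correlated subquery above is really a per-account running sum with an early stop at the first date whose absolute total exceeds 20. A Python sketch of the same calculation over the sample rows (the function name is mine):

```python
from itertools import groupby

# Sample rows from the DDL above: (accountid, transdate, amttran).
rows = [
    (1, '20060101', 10),    (1, '20060102', 5),    (1, '20060103', 12),
    (1, '20060106', 15.50), (1, '20060107', 13),
    (2, '20060101', 15),    (2, '20060102', -6),   (2, '20060103', 2),
    (2, '20060106', 2.75),  (2, '20060107', 4),    (2, '20060111', 5),
    (2, '20060116', 9),
    (3, '20060115', 7),     (3, '20060116', 8),    (3, '20060122', 9),
    (3, '20060126', 10),
    (4, '20060108', 9),     (4, '20060112', 10),
    (5, '20060107', -16),   (5, '20060108', 3),    (5, '20060109', -12),
    (5, '20060111', -16),
]

def threshold_dates(rows, limit=20):
    """First transdate per account where the running |sum| exceeds limit."""
    result = {}
    for acct, group in groupby(rows, key=lambda r: r[0]):
        total = 0
        for _, transdate, amt in group:
            total += amt
            if abs(total) > limit:
                result[acct] = transdate
                break
    return result

print(threshold_dates(rows))
# {1: '20060103', 2: '20060111', 3: '20060122', 5: '20060109'}
# Account 4 never passes 20, so it is absent, as in the desired output.
```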

Monday, March 19, 2012

Find string in Stored Procedures (sp_executesql)

Trying to find a view that is called from many stored procedures - need the
name of the view only. The query below does not work. Do not want to execute?

declare @tblname varchar(120), @sql nvarchar(4000), @gettext varchar(8000)
declare mycur cursor for
select name from sysobjects where xtype='P'
open mycur
fetch next from mycur into @tblname
while @@fetch_status=0
begin
select @sql = N'exec sp_helptext ' + @tblname
EXEC sp_executesql
@stmt = @sql,
@params = N'@gettext int output',
@gettext = @gettext output
if charindex('ATVINVSTANDARDVIEW', upper(@gettext)) > 0
print @tblname
fetch next from mycur into @tblname
end
close mycur
deallocate mycur

Regards,
Jamie

|||

You may want to check out the system table syscomments, where the source of
all user procedures is held.
Anith

|||

http://databases.aspfaq.com/database/how-do-i-find-a-stored-procedure-containing-text.html

"thejamie" <thejamie@discussions.microsoft.com> wrote in message
news:96C2F1E4-2672-4DCF-BA4B-EAB8227F44CD@microsoft.com...
> Trying to find a view that is called from many stored procedures - need
> name
> of view only. Query below does not work. Do not want to execute?
> declare @tblname varchar(120),@sql nvarchar(4000),@gettext varchar(8000)
> declare mycur cursor for
> select name from sysobjects where xtype='P'
> open mycur
> fetch next from mycur into @tblname
> while @@fetch_status=0
> begin
> select @sql=N'exec sp_helptext '+@tblname
> EXEC sp_executesql
> @stmt = @sql,
> @params = N'@gettext int output',
> @gettext= @gettext output
> if charindex( 'ATVINVSTANDARDVIEW',upper(@gettext))>0
> print @tblname
> fetch next from mycur into @tblname
> end
> close mycur
> deallocate mycur
>
> --
> Regards,
> Jamie


Find same records with same ID

Hello,

Having trouble describing my problem

I have the table below, and I am trying to retrieve TileIDs that have the same ModelIDs.

ModelID TileID
HP DL380 G3 120v Dual 15400
HP DL380 G3 120v Dual 15400
HP DL380 G3 120v Dual 15400
HP DL380 G3 120v Dual 15400
HP DL380 G3 120v Dual 15400
HP DL380 G3 120v Dual 15400
Sun SF 280R 120v 15401
Sun SF 280R 120v 15401
Sun SF 280R 120v 15401
Sun SF 280R 120v 15401
Lantronix MSS4 15401

So TileID 15400 would be a keeper, since all ModelIDs are the same.

Any help would be appreciated.

Thanks

SELECT * FROM myTable99 WHERE TILEID IN (
SELECT TILEID FROM (
SELECT DISTINCT MODELID, TILEID
FROM myTable99) AS XXX
GROUP BY TILEID
HAVING COUNT(*) = 1)

|||

unnecessarily complex, brett

try this:

select TileID
from daTable
group by TileID
having COUNT(DISTINCT ModelID) = 1

|||

Brett, r937

Thanks for the fast responses

I tested both and found that Brett's code is complex for a reason... it worked.

|||

so did mine, i tested it

:)
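Both answers boil down to grouping by TileID and keeping the groups that contain exactly one distinct ModelID. A Python sketch over the sample rows above (the helper name is mine):

```python
# Sample (ModelID, TileID) rows from the table above.
rows = (
    [("HP DL380 G3 120v Dual", 15400)] * 6
    + [("Sun SF 280R 120v", 15401)] * 4
    + [("Lantronix MSS4", 15401)]
)

def keeper_tiles(rows):
    """TileIDs whose rows all share a single distinct ModelID."""
    models_per_tile = {}
    for model, tile in rows:
        models_per_tile.setdefault(tile, set()).add(model)
    return sorted(t for t, models in models_per_tile.items()
                  if len(models) == 1)

print(keeper_tiles(rows))  # [15400]
```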

Monday, March 12, 2012

Find out whether SQL Server services are running or not - SP4

Is it possible, by querying SQL Server, to find whether the services listed
below are running or not?

1. SQL Server

2. SQL Agent

3. MSDTC

No. It is best to do this outside of the database. You can use NT command-line utilities or WMI to do this easily.

|||

So basically you can run some command-line tool in xp_cmdshell and parse its output, but that is rather lame. :)

|||

Let me give some ideas on how WMI or NT utilities can be used.

For NT utilities, the command I am referring to is "sc".

Basically I have 100 boxes running SQL Server. I want to be alerted when any of the SQL Server instance services stops.

As all these boxes are production, I am not supposed to create / change anything on them. I have to query those boxes remotely and find out whether the SQL Server services are running or not.

Please note there would be some named instances too.
