
literal string works but variables take forever

I have a query that works when I have fixed values, i.e.:

select
    count(*)
from
    address a
where
    a.primary_name like upper('cambourne court') and
    a.secondary_name like upper('flat 9');

However, if I replace upper('flat 9') with a variable (second_name := upper('flat 9')), the search now returns all 111 addresses in 'cambourne court'.

Why would this be?

EDIT: This is the complete address.sql file (with comments removed)

declare
    address_details address%rowtype;
    current_loc varchar2(32);

    prime_name varchar2(255);
    prime_number varchar2(255);
    second_name varchar2(255);
    street_name varchar2(255);
    town_name varchar2(255);
    success boolean;

    the_count number;
begin

prime_name:=upper('&&primary_name');
prime_number:=upper('&&primary_number');
second_name:=upper('&&secondary_name');
street_name:=upper('&&street_name');
town_name:=upper('&&town_name');


success:=true;

-- error checking here (removed for brevity)


if success then
    current_loc := 'finding address';
    select
        count(*)
    into
        the_count
    from
        dependency d,
        address a,
        street s
    where
        d.dep_obj_id1 = 2 and
        d.dep_obj_id2 = 1 and   
        a.loc_id = d.dep_id1 and
        s.loc_id = d.dep_id2 and 
        a.primary_name like prime_name and
        a.secondary_name like second_name and
        s.name like street_name and
        s.town like town_name;

end if;

dbms_output.put_line('success: address found '||the_count); 


exception 
    when too_many_rows then 
        dbms_output.put_line('failure: too many rows while '||current_loc); 
    when no_data_found then 
        dbms_output.put_line('failure: no rows found while '||current_loc); 
    when others then
        dbms_output.put_line('failure: general error while '||current_loc); 

end; 
/

Update: I restarted SQL*Plus, which seemed to fix the break.

Replacing prime_name and second_name with the actual strings means the code runs in less than a second; with the variables it takes more than 2 minutes.


Your symptoms correspond to having a PL/SQL variable with the same name as a column in the table.
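
As a minimal sketch of that failure mode (hypothetical: it assumes a variable deliberately named after the secondary_name column, whereas your posted block uses second_name), the column name takes precedence over the PL/SQL variable inside the SQL statement, so the predicate becomes "the column LIKE itself" and matches every row:

declare
    secondary_name varchar2(30) := upper('flat 9'); -- same name as the column
    x pls_integer;
begin
    -- the unqualified secondary_name on the right resolves to the column,
    -- so this is effectively a.secondary_name like a.secondary_name
    select count(*) into x
      from address a
     where a.secondary_name like secondary_name;
    dbms_output.put_line('rows: '||x); -- counts every row with a non-null secondary_name
end;
/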

[Edit] Feeling somewhat guilty about an upvote for what wasn't the correct answer, I tried to reproduce this but don't get your results:

SQL> select * from address
  2  ;

PRIMARY_NAME                   SECONDARY_NAME
------------------------------ ------------------------------
CAMBOURNE COURT                FLAT 9
CAMBOURNE COURT                FLAT 10

SQL> declare
  2  second_name varchar2(30) := upper('flat 9');
  3  x pls_integer;
  4  cursor c is
  5  select
  6      count(*)
  7  from address a
  8  where
  9      a.primary_name like upper('cambourne court') and
 10      a.secondary_name like upper('flat 9')
 11  ;
 12  begin
 13  select count(*) into x
 14   from address a
 15  where
 16      a.primary_name like upper('cambourne court') and
 17      a.secondary_name like upper('flat 9');
 18  dbms_output.put_line('literal: '||x);
 19  select count(*) into x
 20   from address a
 21  where
 22      a.primary_name like upper('cambourne court') and
 23      a.secondary_name like second_name;
 24  dbms_output.put_line('variable: '||x);
 25  end;
 26  /
literal: 1
variable: 1

PL/SQL procedure successfully completed.


The 111 records suggest that second_name doesn't contain the value you expect; how are you capturing &&secondary_name, and can you check the value it actually holds before and after your omitted validation section? From the results it seems to contain '%' rather than 'flat 9', but I assume you've already checked that.
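
As a quick sanity check (just a sketch, reusing the variable names from your posted block), you could print the values immediately before the query runs:

set serveroutput on

-- just before the SELECT inside the IF block:
dbms_output.put_line('prime_name=['||prime_name||'] second_name=['||second_name||']');
dbms_output.put_line('street_name=['||street_name||'] town_name=['||town_name||']');

The brackets make a stray '%' or trailing whitespace from the && substitution easy to spot.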

The speed issue suggests the optimiser is changing behaviour in a way that alters the join order and/or the indexes being used. By default that could mean joining every street row with every address record that matches Cambourne Court and only then doing the dependency checks, but it will vary quite a bit based on what indexes it thinks it can use and any stats that are available. The difference is that with the literals, even though you're using like, there are no wildcards, so it may know it can use an index on primary_name and/or secondary_name; in the variable version it can't know that when the query is parsed, so it has to assume the worst case, which would be '%'. Which it may actually be getting, if it's returning 111 addresses.

Without doing an explain plan it's hard to guess exactly what's going on, but you could try adding some optimiser hints to at least try to get the join order right, and even to use an index - though that possibly shouldn't stay in place if you can ever have values starting with '%'. That might tell you what's being done differently.
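
As a sketch of what that might look like (the index name here is invented, so substitute one that actually exists on address), the hints would go straight into your existing SELECT:

select /*+ leading(a) use_nl(d s) index(a address_name_idx) */
    count(*)
into
    the_count
from
    dependency d,
    address a,
    street s
where
    d.dep_obj_id1 = 2 and
    d.dep_obj_id2 = 1 and
    a.loc_id = d.dep_id1 and
    s.loc_id = d.dep_id2 and
    a.primary_name like prime_name and
    a.secondary_name like second_name and
    s.name like street_name and
    s.town like town_name;

leading(a) asks the optimiser to start the join from address, use_nl asks for nested loops to the other two tables, and index(a ...) asks it to use the named index rather than a full scan.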


The explain plan may be suggestive. After running the query, find the sql_id from v$sql for that statement:

select sql_text, sql_id from v$sql where lower(sql_text) like '%address%street%';

Then plug that into

select * from table(dbms_xplan.display_cursor('1mmy8g93um377'));

What you should see at the bottom is something like this, which would show whether there were any oddities in the plan (e.g. using a column from one of the tables, or applying a function...).

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("A"."LOC_ID"="D"."DEP_ID1" AND "S"."LOC_ID"="D"."DEP_ID2")
   4 - filter(("A"."PRIMARY_NAME" LIKE :B4 AND "A"."SECONDARY_NAME" LIKE 
              :B3))
   6 - filter(("S"."NAME" LIKE :B2 AND "S"."TOWN" LIKE :B1))
   7 - filter(("D"."DEP_OBJ_ID1"=2 AND "D"."DEP_OBJ_ID2"=1))


Alex has pointed out the probable cause. Your tables are indexed, and using "like" with a variable is a case of index deactivation. Optimizers treat "like" expressions with constants that contain no wildcards or placeholders as "=", so any indexes present can be considered.

Drop your index on those columns and you'll get the same bad performance with constants or variables. Actually, don't do that - just autotrace and compare the plans.
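
In SQL*Plus that comparison is just (a sketch, assuming the autotrace/PLUSTRACE privileges are set up):

set autotrace traceonly explain

-- literal version
select count(*) from address a
where a.primary_name like upper('cambourne court')
  and a.secondary_name like upper('flat 9');

-- bind variable version
variable second_name varchar2(255)
exec :second_name := upper('flat 9')

select count(*) from address a
where a.primary_name like upper('cambourne court')
  and a.secondary_name like :second_name;

set autotrace off

If the literal plan shows an index range scan on address and the bind version shows a full table scan, that confirms the explanation above.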

Regards,

