Steven Feuerstein wrote a great article on BULK COLLECT, "On BULK COLLECT - Best practices for knowing your LIMIT and kicking %NOTFOUND", which can be found at
http://www.oracle.com/technetwork/issue-archive/2008/08-mar/o28plsql-095155.html
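In a nutshell: fetch with an explicit LIMIT and decide whether to exit based on the collection you just filled, not on %NOTFOUND straight after the fetch (that would silently drop the last, partially filled batch). A minimal sketch of that pattern, with made-up names:

declare
  cursor c_emp is
    select ename from emp;
  type t_ename_tab is table of emp.ename%type;
  l_enames t_ename_tab;
begin
  open c_emp;
  loop
    fetch c_emp bulk collect into l_enames limit 100;
    -- kick %NOTFOUND: testing c_emp%notfound here would lose the last batch
    exit when l_enames.count = 0;
    for i in 1 .. l_enames.count loop
      dbms_output.put_line(l_enames(i));
    end loop;
  end loop;
  close c_emp;
end;
/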
I recently used his suggestions to (re)build a pipelined table function that uses dynamic SQL. The resulting PL/SQL code can be found in the package body listed below. Of course the examples shown here just use dual (which hopefully will never change...) and emp/dept (ditto), but you get the point.
I thought this turned out to be quite elegant, so I share it here to serve as a template.
In my package I use two different approaches to avoid hardcoding the dynamic SQL as plain string literals - because if the SQL were hardcoded, nothing would tell you at compile time that the package generates invalid SQL:
- A view, with the function's input parameters anchored to the view's columns. The package becomes invalid whenever the view (or anything it references) changes or becomes invalid. See the first sketch after this list.
Note: If the dynamic SQL uses view columns that are not input parameters of the function, you need to declare variables in the function anchored to those columns. This ensures a dependency between the package and the view. In the package below the variable v_dummy_dname is used to illustrate this.
- A cursor, which is a more elegant solution (no need for a view - "the fewer objects, the better") and which (according to Tom Kyte) does the same "bind variable trick" as the dynamic SQL. The package becomes invalid whenever the cursor (or anything it references) changes or becomes invalid. See the second sketch after this list.
No need for a variable like v_dummy_dname here; everything is included in the cursor itself.
The cursor does complicate the "order by", and you lose the ascending/descending option, but that's probably a minor issue in most cases.
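For what it's worth, here is a stripped-down sketch of the view variant. All object names (emp_dept_v, emp_pipe_pkg, emps_by_dept) are made up for this illustration; the real package below has its own names and columns.

create or replace view emp_dept_v as
  select e.empno, e.ename, e.sal, e.deptno, d.dname
  from   emp e
         join dept d on d.deptno = e.deptno;

create or replace package emp_pipe_pkg
as
  type t_row is record
  ( empno number
  , dname varchar2(30)
  );
  type t_tab is table of t_row;

  -- Anchoring the parameter to the view column creates the dependency on the view.
  function emps_by_dept (p_deptno in emp_dept_v.deptno%type)
    return t_tab pipelined;
end emp_pipe_pkg;
/

create or replace package body emp_pipe_pkg
as
  function emps_by_dept (p_deptno in emp_dept_v.deptno%type)
    return t_tab pipelined
  is
    c_limit constant pls_integer := 100;
    -- dname is used in the dynamic SQL but is not a parameter, so an
    -- (otherwise unused) anchored variable keeps the package dependent
    -- on that column as well.
    v_dummy_dname emp_dept_v.dname%type;
    l_cur  sys_refcursor;
    l_rows t_tab;
  begin
    open l_cur for
      'select empno, dname from emp_dept_v where deptno = :b_deptno'
      using p_deptno;
    loop
      fetch l_cur bulk collect into l_rows limit c_limit;
      exit when l_rows.count = 0;
      for i in 1 .. l_rows.count loop
        pipe row (l_rows(i));
      end loop;
    end loop;
    close l_cur;
    return;
  end emps_by_dept;
end emp_pipe_pkg;
/

Calling it is the usual select * from table(emp_pipe_pkg.emps_by_dept(10)).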
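And a similarly stripped-down sketch of the cursor variant (again, the names are mine; I also assume a case expression on a sort parameter is what handles the "order by", which is where the ascending/descending choice gets lost):

create or replace package emp_cur_pkg
as
  type t_row is record
  ( empno number
  , ename varchar2(30)
  );
  type t_tab is table of t_row;

  function emps_by_dept
  ( p_deptno   in emp.deptno%type
  , p_order_by in varchar2 default 'ENAME'
  ) return t_tab pipelined;
end emp_cur_pkg;
/

create or replace package body emp_cur_pkg
as
  function emps_by_dept
  ( p_deptno   in emp.deptno%type
  , p_order_by in varchar2 default 'ENAME'
  ) return t_tab pipelined
  is
    c_limit constant pls_integer := 100;
    -- The static cursor creates the dependency on emp and binds its
    -- parameters automatically; no dummy variables needed.
    cursor c_emp (b_deptno emp.deptno%type, b_order_by varchar2)
    is
      select e.empno, e.ename
      from   emp e
      where  e.deptno = b_deptno
      order  by case b_order_by
                  when 'ENAME' then e.ename
                  when 'JOB'   then e.job
                  else lpad(to_char(e.empno), 10, '0')
                end;
    l_rows t_tab;
  begin
    open c_emp (p_deptno, p_order_by);
    loop
      fetch c_emp bulk collect into l_rows limit c_limit;
      exit when l_rows.count = 0;
      for i in 1 .. l_rows.count loop
        pipe row (l_rows(i));
      end loop;
    end loop;
    close c_emp;
    return;
  end emps_by_dept;
end emp_cur_pkg;
/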
Tom Kyte has a few nice pages on these topics: here, here and here.