Imagine you have a disk full alert.

Now imagine you have several tablespaces with lots of free space, but when you try to shrink them you find out that many segments are placed in the last blocks of their datafiles.

Maybe you would move the last segment of the file to another tablespace. Here is the statement to find it:

-- segments occupying the highest block of each datafile in the tablespace
select file_id, owner, segment_name, segment_type
from dba_extents
where tablespace_name = '&&TABLESPACE_NAME'
  and (file_id, block_id) in (select file_id, max(block_id)
                              from dba_extents
                              where tablespace_name = '&&TABLESPACE_NAME'
                              group by file_id);
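
If you then try to resize the datafile below those blocks, Oracle refuses. A hypothetical example, just to show why that last segment matters (the file name and target size are placeholders):

-- Placeholder datafile name and size; with used blocks beyond the requested size,
-- this fails with ORA-03297: file contains used data beyond requested RESIZE value
alter database datafile '/u01/oradata/MYDB/users01.dbf' resize 500M;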

But if you want to move all the segments to a new, empty tablespace in order to “defrag” the old one, then you have to move them segment by segment.

I usually launch ad hoc SQL that generates the statement I want to propagate, just like in this silly example:

select 'alter package '||owner||'.'||object_name||' compile;' sentencia
from dba_objects where status='INVALID' and object_type ='PACKAGE BODY';

Of course, I would rather use utlrp.sql instead, but you get the point.
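
The usual way to run whatever such a query generates is to spool the output to a file and execute it. A minimal sketch, where recompile_packages.sql is just an example file name:

-- Sketch of the generate-and-run pattern; recompile_packages.sql is an example name
set heading off feedback off pagesize 0 trimspool on
spool recompile_packages.sql
select 'alter package '||owner||'.'||object_name||' compile;'
from dba_objects
where status = 'INVALID' and object_type = 'PACKAGE BODY';
spool off
@recompile_packages.sql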

So, as each type of segment/object has its own syntax for the “move” operation, you have to adapt this scripting to the type of segment you want to move. Note that tables, indexes and LOBs may be partitioned, and here again the syntax is different.

You need a different syntax to move a non-partitioned table, a table partition, a non-partitioned index, an index partition, a non-partitioned LOB segment and a LOB partition.
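
Roughly, the six flavours look like this (scott.t, scott.i, p1, lob_col and NEW_TS are placeholder names, not anything the script assumes):

-- Placeholder names only: scott.t, scott.i, p1, lob_col, NEW_TS
alter table scott.t move tablespace NEW_TS;                                        -- non-partitioned table
alter table scott.t move partition p1 tablespace NEW_TS;                           -- table partition
alter index scott.i rebuild tablespace NEW_TS;                                     -- non-partitioned index
alter index scott.i rebuild partition p1 tablespace NEW_TS;                        -- index partition
alter table scott.t move lob (lob_col) store as (tablespace NEW_TS);               -- non-partitioned LOB segment
alter table scott.t move partition p1 lob (lob_col) store as (tablespace NEW_TS);  -- LOB partition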

Over the years I have used a script to generate all those SQL statements automatically, and I have found myself having to finish “manually” a failed tablespace defragmentation started with the Oracle Enterprise Manager Cloud Control option for Tablespace Reorganization.

I remembered a movie where the owner of a factory shows his son how the factory works. An employee describes every manufacturing step in detail, as if it were something super complicated, but then the father explains: “Don’t worry, since we automated the factory, everything is done automatically”, and the employee replies: “But what happens if the ‘automatic’ fails? Who is the one here to switch the ‘automatic’ on?”

Whether you run an automatic defrag with Enterprise Manager or any other tool, or you have to do it manually because the automatic one fails, my script is on the Café Database GitHub here:

https://github.com/CafeDatabase/defrag_tablespace

Here you will find two scripts:

One is defragmenta_tablespace.sql. You may set the values for the source tablespace, the target tablespace and the initial extent size in the define section, or let SQL*Plus prompt for them at execution time: uncomment the three “define” lines or the three “accept” lines, and comment out the other three:

-- define tablespace_origen=ORIGIN
-- define tablespace_destino=TARGET
-- define t_extension_inicial=1

accept tablespace_origen prompt 'Tablespace source: '
accept tablespace_destino prompt 'Tablespace target: '
accept t_extension_inicial default '1' prompt 'Size of initial segment extent [default 1M]:'

The script will show the SQL syntax to move three types of segments: tables, indexes and LOB segments, with specific syntax for the partitioned ones, and you may adapt the scripting as follows:

    • You may add or omit the “update indexes” clause in the tables/partitions section. It increases the time taken, but it is mandatory if you have to run this script online.
    • You can also allow the execution of the “rebuild_unusable” procedure or leave it commented out. Keeping it commented makes the table moves faster, and the indexes will be rebuilt later anyway in the index rebuild section. If the indexes are placed in the same tablespace, they will be moved too after the tables and table partitions, so it is faster to keep it commented out and rebuild them all later, provided no users are using the tables.
    • You can adjust or remove the PARALLEL clause. If the environment is all yours and you have plenty of CPUs at hand, PARALLEL will speed up the reorganization to the other tablespace. If you know a good value for parallelism, like 120 (assuming you have 120 CPUs for yourself), this is the place to set it.
    • The same goes for NOLOGGING, but you need to ensure that all the segments are set back to LOGGING mode after you finish, and then run a physical backup. A sketch of what one generated statement can look like with these optional clauses follows this list.
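
For illustration only, a move for one table partition with those optional clauses might look like this (owner, table, partition and tablespace names are placeholders, not what the script emits verbatim):

-- Hypothetical names; the optional clauses discussed above are all present here
alter table scott.sales move partition p_2023
  tablespace NEW_TS nologging
  update indexes
  parallel 8;

-- And once the reorganization is finished, put the segment back in LOGGING mode
alter table scott.sales modify partition p_2023 logging;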

The other script is rebuild_unusable.sql, and I’m sure you know exactly what it does.

Yes, it rebuilds all unusable indexes.

Here you may adapt the scripting as follows:

    • You may adjust or remove the PARALLEL clause. Leave automatic parallelism or use a fixed value.
    • The same for the rebuild ONLINE clause. ONLINE keeps the locking down to small parts so that users can keep working during the rebuild, but if the environment is all yours, again, you don’t need it. If you are alone in the database, in a restricted session, you can skip this clause (see the sketch after this list).
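
A minimal sketch of the kind of statements such a rebuild has to generate, assuming it covers both regular indexes and index partitions; the PARALLEL and ONLINE clauses are the optional ones discussed above:

-- Sketch only: rebuild statements for unusable indexes and unusable index partitions
select 'alter index '||owner||'.'||index_name||' rebuild online parallel 8;'
from dba_indexes
where status = 'UNUSABLE'
union all
select 'alter index '||index_owner||'.'||index_name||
       ' rebuild partition '||partition_name||' online parallel 8;'
from dba_ind_partitions
where status = 'UNUSABLE';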

And one more thing: just make sure you create this procedure as a DBA user, so it has no issues querying objects from other schemas, and everything should be fine.
