How to activate an index on the DB level

Some indexes exist only in the ABAP Dictionary. In certain situations, if specifically indicated by SAP, they can also be activated at database level. One such scenario is described in SAP Note 2489084.

What are the necessary steps to activate an index at DB level?

Go to SE11, enter the name of the table to which the index belongs -> Display

Then click the Indexes button -> pick the corresponding index

You will see that the index is marked as a “no database index” and below it the message “Index does not exist in database system ORACLE”

Switch to change mode and select “Index on all database systems” instead

A transport request that records the modifications will have to be created for this change.

After this option is saved, the index becomes inactive; activate it again in the same SE11 transaction.

Then go to SE14, enter the name of the corresponding table -> Edit -> Indexes -> Select the wanted index

Here you will also see the message “Does not exist in the database”. For the processing type select Background, as this can be a time-consuming action, then press the Create database index button.

You can monitor the progress of this action in SM37:

After the job is completed, you can see that the index now also exists on the DB side:

To pass the modification on to the next systems, go to SE09, release the transport and then import the TR into the target systems. No other action is needed on the target system.
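If you want to double-check on the Oracle side that the index really exists after the SE14 job, a query on DBA_INDEXES will show it. The sketch below only composes the statement; the owner and table names (SAPSR3, ZTABLE) are illustrative examples, so substitute your schema owner and the table from SE11, and pipe the output into sqlplus on the database host.

```shell
# Sketch: compose a verification query against DBA_INDEXES.
# $1 = schema owner, $2 = table name (both examples here).
index_check_sql() {
  printf "select index_name, status from dba_indexes where owner = '%s' and table_name = '%s';\n" "$1" "$2"
}

# On the database server, run the statement as sysdba:
#   index_check_sql SAPSR3 ZTABLE | sqlplus -s / as sysdba
index_check_sql SAPSR3 ZTABLE
```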

The following entries in table ‘TAORA’ do not represent existing DB containers: [TABART, DBSPACE]

During a SUM upgrade, in the extraction phase PREP_INIT/INIT_CNTRANS_PRE, the following error was encountered:

The following entries in table ‘TAORA’ do not represent existing DB containers: [TABART, DBSPACE]

DODS PSAPDATODS

The following entries in table ‘IAORA’ do not represent existing DB containers: [TABART, DBSPACE]

DODS PSAPDATODS


From the description of the error we can conclude that the TABART DODS is assigned to the tablespace PSAPDATODS, which supposedly does not exist.
I checked at database level and all tablespaces were present, including PSAPDATODS:

sqlplus / as sysdba
select TABLESPACE_NAME from USER_TABLESPACES;
TABLESPACE_NAME
------------------------------
PSAPDAT
PSAPDAT740
PSAPDATFACT
PSAPDATODS
PSAPDATUSR
PSAPTEMP
PSAPUNDO
SYSAUX
SYSTEM

In SAP, all of them were also visible in table TSORA:


This sounds very strange, so let’s also verify the SUM logs to see exactly what the tool checks:
The tool runs three scripts (found in /usr/sap/SID/SUM/abap/control/dbs/ORA): SELTAIA.ORA and TAIORA.SAP, which list the mapping between TABART and tablespaces, and SELDBS.ORA, which lists the names of the available tablespaces. The output of the first two scripts looks fine, but SELDBS.LOG, the log of SELDBS.ORA, does not display all the tablespaces available in the database:

# cat SELDBS.LOG

tp_exec_statement: select TABLESPACE_NAME from USER_TABLESPACES
DBS>>> PSAPDAT
DBS>>> PSAPDAT740
DBS>>> PSAPDATFACT
DBS>>> PSAPDATUSR

tp_exec_dbscript: 1 statement(s) successfully processed.

The output differs because SUM executes the SQL statement as the schema owner, and USER_TABLESPACES only lists the tablespaces accessible to that user; in this case the database-specific preparations mentioned in the SUM upgrade guide had not been performed.

The system privileges of the Oracle schema user SAP<SCHEMA-ID> (typically SAPSR3) have to be checked with the statement:

select * from dba_sys_privs where grantee = 'SAPSR3';

As the UNLIMITED TABLESPACE privilege does not exist for the user, it has to be granted:

grant unlimited tablespace to SAPSR3;

After the grant was given, the error was solved: the SUM tool could see all the available tablespaces and moved on to the next step.
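The check and the fix can also be scripted. The helpers below only prepare the two statements (SAPSR3 is just the common default schema owner; use your actual SAP<SCHEMA-ID> user), to be piped into sqlplus as sysdba on the database host.

```shell
# Sketch: prepare the privilege check and the grant for the schema owner.
# SAPSR3 is an example; substitute your SAP<SCHEMA-ID> user.
privs_sql() {
  printf "select privilege from dba_sys_privs where grantee = '%s';\n" "$1"
}
grant_unlimited_sql() {
  printf "grant unlimited tablespace to %s;\n" "$1"
}

# On the database server:
#   privs_sql SAPSR3 | sqlplus -s / as sysdba
#   grant_unlimited_sql SAPSR3 | sqlplus -s / as sysdba
privs_sql SAPSR3
grant_unlimited_sql SAPSR3
```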


Activate SAP* user in a SAP Java stack system

If you are in the situation that you do not remember the password of any user on a Java stack system, you can activate the SAP* user. These are the steps to follow:
The activation is done from configtool, which needs to be switched to configuration editor mode:

And go to Configurations -> Cluster_config -> custom_global -> cfg -> services -> com.sap.security.core.ume.service -> Propertysheet properties


Hit the change button; as the warning message states, the system needs to be restarted for the SAP* activation to take effect:


And here there are two configurations that we are interested in:

ume.superadmin.activated – By default it is set to FALSE. In order to activate SAP* change it to TRUE
ume.superadmin.password – If you do not remember the password of SAP* you can enter here a new one
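For reference, this is how the two properties would look in the propertysheet after the change (the password value is of course only a placeholder):

```text
ume.superadmin.activated = true
ume.superadmin.password  = <new SAP* password>
```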


After the restart of the Java system you can log in with the SAP* user and the password set in the previous step.

Be aware that while SAP* is activated, logon for all other users is disabled. So after you have unlocked your user or changed its password, you will have to deactivate the SAP* user again and, of course, restart the system once more so the change takes effect.

How to export only some rows of a table using the Oracle Data Pump option

Oracle Data Pump (expdp/impdp) is a well-known utility for exporting data from tables. It offers a variety of options and can be used not only to export an entire table but also to export a restricted set of rows, defined for example via a WHERE clause.
Let’s say that we want to export the rows of table HRP1001 that meet the following conditions:

OTYPE = Q
RSIGN = B
RELAT = 032
SCLAS = P

We create a parameter file tab_where.par with the following content:

tables=HRP1001
#table name to be exported
directory=tmp_dmpdirs
#oracle directory where the export file will be located
DUMPFILE=exp_HRP1001_part.dmp
#file name that contains the exported data
logfile=exp_HRP1001_part.log
#log file of the export
query=HRP1001:"where OTYPE='Q' and RSIGN='B' and RELAT='032' and SCLAS = 'P'"
#the query that defines the data required from the table
COMPRESSION=all
#compression option for the dump file

Then we execute the expdp command that calls the parameter file that we just defined:

expdp user/passwd parfile=tab_where.par

Export: Release 12.1.0.2.0 - Production on Tue Nov 14 15:43:51 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

UDE-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": user/******** parfile=tab_where.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.285 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "HRP1001" 14.47 MB 584555 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/oracle/exp_tbls/exp_HRP1001_part.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Tue Nov 14 15:44:07 2017 elapsed 0 00:00:16

To import the data from this dump we use the impdp utility. The same parameter file can serve as a starting point, but note that COMPRESSION is an export-only parameter and should be removed for the import:

impdp user/passwd parfile=tab_where.par
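A sketch of a separate import parameter file, following the file names from the example above: the export-only COMPRESSION parameter is dropped, and TABLE_EXISTS_ACTION=append is an assumption on my side, so pick the value (skip/append/truncate/replace) that fits your case.

```shell
# Sketch: build a matching import parameter file for impdp.
# COMPRESSION is omitted (export-only); TABLE_EXISTS_ACTION is optional.
cat > tab_where_imp.par <<'EOF'
tables=HRP1001
directory=tmp_dmpdirs
DUMPFILE=exp_HRP1001_part.dmp
logfile=imp_HRP1001_part.log
TABLE_EXISTS_ACTION=append
EOF

# impdp user/passwd parfile=tab_where_imp.par
```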

SAP Bundle patch bug in 12.1.0.2.180417 – 201805 => ORA-03171: Recovery was terminated during query execution

My experience with the latest SAP Bundle Patches wasn’t very good. In a previous post I mentioned this problem: http://sapnotebook.com/lob-segment-growing-large-bug-sap-bundle-patch-12-1-0-2-170418-201705/.
And now there is another bug that I experienced: 24447296 – ORA-3171 in a Non-ADG Database (Doc ID 24447296.8).
This might also be happening to you if you installed one of these patches: SBP 201711, 201708 or 201802 for Oracle 12.1.0.2 and you have a UNIX environment.
The bug manifests right after installing one of these patches: the system fills with short dumps of type 'DBIF_RTAB_SQL_ERROR' with short text "SQL error 3171 when accessing table VBDATA".
And when analyzing the developer traces for the corresponding work processes, you will get something similar to this:

———————————————-
C Wed Apr 11 08:43:46 2018
C check2: OCIStmtExecute() failed with -1=OCI_ERROR
C SQL error 3171:
C ORA-03171: Recovery was terminated during query execution
C *** ERROR => Error 3171 in stmt_execute() from oci_execute_stmt(), orpc=0
[dbsloci.c 17449]
C {root-id=0050568F7B7D1ED88FAA2DC3B5D6D34C}_{conn-id=00000000000000000000000000000000}_0
C *** ERROR => ORA-03171 occurred at SQL stmt (parse error offset=0)
[dbsloci.c 17450]
C {root-id=0050568F7B7D1ED88FAA2DC3B5D6D34C}_{conn-id=00000000000000000000000000000000}_0
C Dump statement cache (after SQL error):
———————————————-

VBDATA is not the only table for which the dumps can appear; the tables TST03, SMW3_BDOC2, BC_MSG, SXMSCLUR, IUUC_RL_DATA, SXMSCLUP, SOFFCONT1 and /WATP/TMOBMRIA can also be affected.
The error does not occur in all systems that have these patches installed; it is most likely to happen in systems where the above tables are very large and accessed very often.
The tricky part is that the bug can manifest directly in the production system, even if the patch was previously applied to DEV and QAS with no side effects, because in production the load on the tables can be significantly higher.
After some time the error was also documented by SAP in two notes, which I recommend reading very carefully and checking from time to time for updates:

2633327 – ORA-03171 while accessing table in a Non Active Data Guard database (non-ADG)
2668860 – ORA-00600 [3020] with block type NGLOB: “Hash Bucket”

In the first versions of note 2633327, it was stated that the bug would be fixed by applying an Oracle merge patch or by installing the newer bundle patch 201805.
But a later update of the note states that you should definitely not install those patches, as you could run into even more serious problems that can lead to an unrecoverable database!
At the moment there is no fix available, and an update of the note is still awaited.
So I would say think twice before installing any new bundle patch from Oracle, as these can have a negative impact on your business jobs and seriously affect daily operations.

*** Update: On 15.08.2018 SAP updated note 2633327 and several fixes are now available depending on the SBP installed. Read it carefully and apply the fix that matches your case.

SAP Memory Analyzer – MAT – installation and usage

Memory Analyzer is an Eclipse-based tool used to find the cause of heap dump generation in SAP Java systems. There are several SAP sources, which I will list at the end of the article, but unfortunately this information is a little outdated; that is why I would like to clarify here the steps that need to be followed to install the tool and how a dump can be analyzed with it.

These are the steps for the installation:

1. The download location is: http://www.eclipse.org/mat/downloads.php
If you have a windows environment you have to download the Windows (x86) kit to your local computer => MemoryAnalyzer-1.7.0.20170613-win32.win32.x86.zip

2. Download the extension pack from the note 1883568 – How to self analyze a Heap Dump using MAT

MemoryAnalyzer-extensions-update-site.zip

3. Unzip MemoryAnalyzer-1.7.0.20170613-win32.win32.x86.zip to your computer and run the MemoryAnalyzer.exe from the mat directory:

4. Install the extension downloaded earlier as follows:

Add the path to extension archive:

And the installation is completed; only a restart of the tool is necessary. That’s it, now the tool is ready to use.

Now, before opening the dump file, you have to make sure that enough RAM is available on the machine where you run the heap dump analysis: more than the size of the dump. For larger dumps it is possible to get this error when opening the file for analysis:

To solve this, change the -Xmx parameter to a higher value in the file MemoryAnalyzer.ini, located in the mat directory. The value has to be set to at least the size of the dump file:
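For example, for a dump of around 6 GB the tail of MemoryAnalyzer.ini could look like this (8g is just an example value; the shipped default is much lower):

```text
-vmargs
-Xmx8g
```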

Now just open the dump file:

The most used option is the Leak Suspects Report: it analyzes your dump, searches for memory leaks and lists the applications/libraries causing the problem, split into percentages.

leak suspects

Under each Problem Suspect you should find some keywords that you can search for on SMP or SCN; if the suspect is custom code or a third-party application, you can get in contact with the responsible party to find the root cause.
More information about heap/thread dumps can be found in my post: SAP java heap and thread dumps

Useful related SAP notes and documentation:

2063943 – Memory Analyzer Tool stops the heap dump parsing due to insufficient memory

1883568 – How to self analyze a Heap Dump using MAT

https://wiki.scn.sap.com/wiki/pages/viewpage.action?pageId=237307510


SAP Java heap and thread dumps

Definition of thread and heap dump

A thread dump is a snapshot of how the Java threads of the running server processes look at a point in time, whereas a heap dump is a snapshot of the memory of a java process.

Parameters

By default, if there is an out-of-memory situation, the system itself will automatically generate a heap dump. This behavior is ensured by the Java parameter -XX:+HeapDumpOnOutOfMemoryError, which you can find in the configtool setup:

If this parameter is deleted, the automatic generation of the dump will not happen. The file will be quite large, approximately the size of the Java heap.

There is also a similar parameter, -XX:+HeapDumpOnCtrlBreak, which is not set by default. When it is activated, a heap dump is written together with the thread dump every time a CTRL_BREAK (Windows) or SIGQUIT (UNIX) event is triggered, manually or by the system. If such behavior is not intended, this parameter should be deleted, so as not to generate an unnecessary number of heap dumps.

Naming conventions and paths

You can find the dumps in the default location /usr/sap/SID/instance/j2ee/cluster/serverN. This location is defined by the configtool parameter -XX:HeapDumpPath=<directory> and can be changed if, for example, you do not have enough space in the default location and want the dumps to be generated somewhere else.
Note that this path is only valid for dumps generated automatically via -XX:+HeapDumpOnOutOfMemoryError, not for those triggered by -XX:+HeapDumpOnCtrlBreak.
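Side by side, the three parameters discussed would appear in the configtool Java parameters roughly like this (the path is an example, and the CtrlBreak flag only belongs here if that behavior is actually wanted):

```text
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/usr/sap/SID/instance/j2ee/cluster
-XX:+HeapDumpOnCtrlBreak
```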

Heap dumps generated automatically by OutOfMemory errors will be named OOM.hprof and OOM.addons.
Heap dumps generated by a user will look like java_pid.hprof and java_pid.addons.
In both situations, these can be found in /usr/sap/SID/instance/j2ee/cluster/serverN.
Standalone thread dumps will have the naming std_serverX.out and can be found in /usr/sap/SID/instance/work/

Generation methods

As of Java 7.3 it is very easy to manually generate a dump, if needed (for example in case of poor performance of the application). Both thread and heap dumps can be generated directly from NWA -> Troubleshooting -> Advanced Troubleshooting -> Thread Dump Analysis or Heap Dump Analysis

Another, slightly more complex, way to do this is from the JSMON tool. You can list all the available options like this:

jsmon pf=instance_profile
help => to see all the options available
process sdump idx => to generate a thread dump of a process
or
process hsdump idx => to generate a heap dump of a process

A thread dump can also be generated from outside the system, from Solution Manager. This is how it looks for Solman 7.1 SP14:

Go to SAP Solution Manager Work Center -> Root Cause Analysis -> System Analysis -> choose the system for which you want to generate the thread dump -> Expert Links -> Start Embedded -> Thread Dump Analysis -> trigger the thread dump for the wanted processes:

Now that we have talked about heap dumps, you can find out in my next post how to analyze these memory-related dumps and get an idea of what is causing the memory problems: SAP Memory Analyzer – MAT – installation and usage

Useful related SAP documentation:
1004255 – How to create a full HPROF heap dump of J2EE Engine 6.40/7.0
https://wiki.scn.sap.com/wiki/pages/viewpage.action?pageId=237307510
https://wiki.scn.sap.com/wiki/display/ASJAVA/%28JSTTSG%29%28Deploy%29Thread+Dump

How to export/import Java in dual stack systems

In this article I will cover the export/import of the Java part of a dual-stack system, a task mostly used during a system copy. It is a topic that is not documented in much detail by SAP, so I hope this will come in handy.
First of all, you should know that no downtime is required for exporting the Java data; this is an online task and is done with the help of the SWPM tool.

Preparations

On the target system, make a backup of the below files, as most of them will be overwritten during the import:

1. SAP profiles
2. sidadm environment profiles
3. database profiles $ORACLE_HOME/dbs and if Oracle 12c also /oracle/SID/sapprof
4. listener files from $ORACLE_HOME/network/admin

After all these files are backed up proceed with the following:

1. Prepare the location for the source system Java export; no more than 5 GB should be required
2. Download SWPM tool and extract it
3. Prepare a temporary location for the SWPM logs

export TMP=/usr/sap/SID/temp_sapinst
export TEMP=/usr/sap/SID/temp_sapinst
export TMPDIR=/usr/sap/SID/temp_sapinst
export JAVA_HOME=/usr/sap/SID/SYS/exe/jvm/sunx86_64/sapjvm_4.1.035/sapjvm_4
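The temp-directory preparation above can be wrapped in a small helper, so the scratch path is created and exported in one step (the path in the example is the one used in this article; JAVA_HOME stays environment-specific):

```shell
# Sketch: create the sapinst scratch directory and point the temp
# variables at it. $1 = scratch directory, e.g. /usr/sap/SID/temp_sapinst.
prep_sapinst_env() {
  mkdir -p "$1" || return 1
  export TMP="$1" TEMP="$1" TMPDIR="$1"
}

# Example: prep_sapinst_env /usr/sap/SID/temp_sapinst
```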

Export

As root run sapinst on the source system and start the export:

./sapinst SAPINST_USE_HOSTNAME=host_name SAPINST_SLP_MODE=false

Choose the export option:

Give the path to the SAP profiles:

Instances installed will be detected:

Database data is identified:

The database copy was done in this case with database tools, so this option can be selected:

Enter the location where the export will be generated:

If you want to remove the sapinst group that was added to the users:

The summary of the selections made:

Reaching the end of the export:

Import

After the system copy is completed with the help of database tools and also the postprocessing of the ABAP part was done, Java can be imported on the target system.
Prepare the environment parameters and start sapinst with root:

export TMP=/usr/sap/SID/temp_sapinst
export TEMP=/usr/sap/SID/temp_sapinst
export TMPDIR=/usr/sap/SID/temp_sapinst
export JAVA_HOME=/usr/sap/SID/SYS/exe/jvm/sunx86_64/sapjvm_4.1.035/sapjvm_4

Some archives also need to be downloaded from the Support Portal beforehand, as they will be requested during the import procedure: Kernel, Oracle client, Java component

./sapinst SAPINST_USE_HOSTNAME=host_name SAPINST_SLP_MODE=false

Choose the correct option:

In order to be able to customize some options, select custom:

Download the correct kernel archive for your system and select it here:

Enter the system parameters:

DNS name:

Enter the password for the users that will be created:

Select homogeneous system copy, as the system copy was already done via backup/restore with database tools:

Select database data:

Database version:

If you have Oracle 12c, hopefully installed your software with user oracle, so choose it:

Some more database related parameters:

Enter Oracle passwords:

Listener information:

Enter the path to the Oracle instant client software, that you have to download:

The database is already recovered and running, so choose the appropriate option. OPS$ users can be deleted, as SSFS is implemented:

Key phrase for secure store:

Enter the path to the Java component archive:

You can choose the system numbers for your instances:

Enter message server internal port for SCS:

J2EE_ADMIN password:

SDM password:

No need to install the webdispatcher:

Exit option that does not need to be selected:

DDIC password:

Archives will be unpacked:

If you want to remove the groups sapinst from the users:

Summary of your selections:

Import is now in progress:

It is likely that during the import you will face the following errors, generated because the already existing database configuration is overwritten with a default one.
Copy back the old files that you saved earlier:

Listener files:

Oracle pfile and spfile:

After replacing the files with the old ones, the import should continue with success.

SAP One Support Launchpad useful paths for SAP Basis Admin

As I am sure you noticed, lately the Support Portal from SAP, or more recently the Launchpad, has changed many times. Probably for many of you it is still not clear where to find the things you need, so I thought it would be a good idea to share the locations of the most important areas a Basis admin would need. This page will be updated with new items regularly, so keep an eye on it from time to time ;).
Sometimes the Portal might experience problems and be unavailable; other times planned activities require downtime and the Portal cannot be accessed. There is one link where you can check the service availability of the site, so access it if you experience any problems or, more conveniently, subscribe to the page so you are notified:

https://status.support.sap.com/

If you want to find any information related to a product starting from KBAs, blog articles, forum questions, guides, archives to use for install/upgrade you can do a search for it here:

https://launchpad.support.sap.com/#/productsearch

Alternatively, you can add this tile as a shortcut on your Launchpad page: click the Personalize button on your Launchpad page:

Launchpad personalize

Search on My products:

And add the tile on your home page:

View/request migration keys

Direct link: https://launchpad.support.sap.com/#/migration/keys/all
Path: One Support Launchpad -> System Operations and Maintenance -> Migration Keys

SUM Tool

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> By Category -> Additional components -> UPGRADE TOOLS -> SL TOOLSET -> SL TOOLSET 1.0 -> ENTRY BY COMPONENT -> SOFTWARE UPDATE MANAGER (SUM) -> SOFTWARE UPDATE MANAGER 1.0 or SOFTWARE UPDATE MANAGER 2.0 -> SUPPORT PACKAGES AND PATCHES -> Choose your package depending on your platform

SWPM

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> By Category -> Additional components -> UPGRADE TOOLS -> SL TOOLSET -> SL TOOLSET 1.0 -> ENTRY BY COMPONENT -> SOFTWARE PROVISIONING MANAGER -> SOFTWARE PROVISIONING MGR 1.0 -> Choose the required SWPM tool

SPAM/SAINT package

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> By Category -> Additional components -> UPGRADE TOOLS -> SL TOOLSET -> SL TOOLSET 1.0 -> Choose your needed packages depending on your SAP release

LMSERVICE

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> SAP TECHNOLOGY COMPONENTS -> SAP SOLUTION MANAGER -> SAP SOLUTION MANAGER 7.1 -> ENTRY BY COMPONENT -> SOLUTION MANAGER JAVA STACK -> SOLMANDIAG 710

Generate developer keys

Direct link: https://launchpad.support.sap.com/#/sscr/developers/my
Path: One Support Launchpad -> Software Change Registration On premise -> Developers, Objects

SAP Bundle Patch

Path: One Support Launchpad -> Software Downloads -> Databases -> Database and Database Patches -> Oracle -> Oracle Patch and Software Downloads -> select your Oracle release -> select your OS type from the drop down list

IGS

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> SAP Frontend Components -> SAP IGS -> select your release and here you also have SAP IGS HELPER APPLICATIONS

Guide Finder

Path: https://help.sap.com/viewer/index => in the search box choose product hierarchy and pick the product for which you would like the guide
(For Basis, probably the most useful category will be SAP NetWeaver)
One example of a direct link, for NetWeaver 740 => https://help.sap.com/viewer/p/SAP_NETWEAVER_740

SAP Cryptographic library

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> By Category =>
SAP CRYPTOGRAPHIC SOFTWARE => SAPCRYPTOLIB

Kernel

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> By Category => MISCELLANEOUS COMPONENTS => SAP KERNEL => Choose the needed kernel by Unicode, version, OS

B2B components for PI

Path: One Support Launchpad -> Software Downloads -> SUPPORT PACKAGES & PATCHES -> By Category => SAP NETWEAVER AND COMPLEMENTARY PRODUCTS => PI B2B ADDON 1.0 => REQUIRED COMPONENTS OF OTHER PRODUCT VERSION => JAVA COMPONENTS REQUIRES => PI SFTP PGP ADDON 1.0 – JAVA COMPONENTS => choose PGP MODULE 1.0 (for PIB2BPGP) and
SFTP ADAPTER 1.0 for PIB2BSFTP

LOB segment growing very large bug in SAP Bundle Patch 12.1.0.2.170418 – 201705

It looks like the SAP Bundle Patch 12.1.0.2.170418 – 201705 contains some bugs. At least for me, this was noticed in a Solaris Intel environment on a Solution Manager 7.1 and a PI 7.4 system.
A week or so after the installation, the database was growing unusually fast. Checking the top growth segments in the Solman database, the object responsible for this was the LOB segment corresponding to the column LAST_TEST in table MES_DB_AGGREGATE. In the productive system it was growing by more than 20 GB per day. But in case you also encounter such rapid database growth, let’s see how you can track what is causing it and since when.

As a first step, I would propose checking the top segment growth with the help of tcode ST04 or DBACOCKPIT via Space -> Segments -> Overview. Here you can see the top growing segments per day/week/month:

Here, on the Top sizes tab, you can also check the objects that occupy the most space.

To see how the growth of a particular segment has developed lately, you can check its growth history by going to Space -> Segments -> Detailed Analysis -> fill in the segment name -> OK

Here you can have a look at how the segment grew over days/weeks/months:

The quickest workaround was to reorganize the responsible segment, in order to avoid an overflow of the database.

The growth was caused by abnormal behavior of the Oracle SMCO process, which makes some LOB segments extend indefinitely, a bug that comes with the installation of this bundle patch.

To summarize in a few words: SMCO (Space Management Coordinator) performs proactive space allocation and space reclamation, coordinating the following space management tasks: tablespace-level space pre-allocation, SecureFile LOB segment pre-extension, temporary segment space reclamation and SecureFile LOB segment space reclamation.

There is a parameter that enables/disables the SMCO background process: _ENABLE_SPACE_PREALLOCATION. By default the feature is turned on and the value is set to 3, meaning that space preallocation is active for all autoextensible datafiles and SecureFile segments. Another possible value is 1, meaning that space preallocation is done only for the datafiles. If you want to quickly get rid of this problem, a temporary solution is to disable the feature completely by setting the parameter to 0:

ALTER SYSTEM SET "_ENABLE_SPACE_PREALLOCATION" = 0;

The good news is that the parameter is dynamic, so no restart is required for the change to take effect.

I also experienced this strange behavior of LOB segments growing too large on a PI 7.4 system, for the LOB segments corresponding to the tables BC_MSG and SXMSCLUP.

For more information about LOB segments, you can check out my other blog post: Introduction to LOBs

*** Update: SAP has in the meantime documented this behavior of LOB segments in OSS note 2538588 – 12c: Extensive LOB segment growth after 12.1.0.2 SBP 201705 or 201708 has been applied, and the fix for the bug can now be found in the newest SAP Bundle Patch, SBP 201711.