Foresight Analytics Platform 10.9 and earlier uses the LO data type to store binary data in the PostgreSQL DBMS. Starting from version 10.10, new repositories use the BYTEA data type by default as a replacement for the LO data type.
Repositories created in earlier versions of Foresight Analytics Platform that use the LO data type are supported without restrictions. If required, a repository can be set up to work with the BYTEA data type instead of LO by converting system and custom tables, that is, by replacing table fields of the LO data type with fields of the BYTEA data type. System tables are converted automatically during repository version update by the repository manager; custom tables must be converted manually.
NOTE. Simultaneous use of the LO and BYTEA data types within the same repository is not supported.
Before converting system and custom tables, create a backup copy of the target repository and database.
To convert system and custom tables with fields of the LO data type into tables with fields of the BYTEA data type, follow the steps below.
When the BYTEA data type is used, the size of an object linked to a table must not exceed 1 GB. Tables containing fields of the LO data type may be linked to objects whose size exceeds 1 GB. To search for such objects, create a unit with the BLOB_FINDER identifier in the repository.
NOTE. The unit is used as an example. If required, use other object search methods.
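As the note says, other search methods can be used. One direct alternative, a minimal SQL sketch assuming superuser access (reading the pg_largeobject catalog requires it), lists every large object in the database whose total size exceeds 1 GB. Unlike the unit, it does not match the found objects with repository objects:

```sql
-- List large objects over 1 GB (1073741824 bytes) in the current database.
-- Requires superuser privileges to read pg_largeobject.
SELECT loid AS large_object_oid,
       pg_size_pretty(sum(octet_length(data))) AS total_size
FROM pg_largeobject
GROUP BY loid
HAVING sum(octet_length(data)) > 1073741824
ORDER BY sum(octet_length(data)) DESC;
```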
To execute the unit, set the variables:
DATABASE_OBJECT_ID. Identifier of the database in the repository. The default identifier: METABASE_DB.
SCHEMA_NAME. Schema identifier. The default identifier: public.
NOTE. Schema identifiers specified in the Schema parameter on the Database Properties page and in the SCHEMA_NAME variable must match.
Add links to the Dal, Db, Metabase, and IO system assemblies.
Const
DATABASE_OBJECT_ID = "METABASE_DB"; // Identifier of the PostgreSQL database object in the repository
SCHEMA_NAME = "'public'"; // Schema identifier used in the SQL_GET_LO_TABLES SQL query
// SQL query that gets the list of tables containing fields of the LO data type
SQL_GET_LO_TABLES = "SELECT a.table_name, a.column_name FROM " +
"information_schema.columns a, " +
"information_schema.tables b " +
"WHERE a.table_schema = " + SCHEMA_NAME + " AND " +
"a.table_name = b.table_name AND " +
"a.table_schema = b.table_schema AND " +
"a.udt_name='lo' AND " +
"b.table_type='BASE TABLE' AND " +
"(a.table_name NOT LIKE 'b_%' OR " +
"a.table_name = 'b_mod')";
Var
DB: IDatabaseInstance;
Com, Com_loId: IDalCommand;
Cur, Cur_loId: IDalCursor;
CurFields: IDalCursorFields;
NativeNameStr: String;
FieldName: String;
lo_size: Double;
StreamData: IIOStream;
Mb: IMetabase;
MbObjDescs: IMetabaseObjectDescriptors;
MbObjDesc: IMetabaseObjectDescriptor;
obj_key: Integer;
obj_id: String;
ObjCounter: Integer = 0;
ObjList: String = "";
// Match a relation in the schema with a repository object
Public Function MatchRelationWithMetabaseObject(Desc: IMetabaseObjectDescriptor; SearchCube:boolean):IMetabaseObjectDescriptor;
Var
RDSParent: IMetabaseObjectDescriptor;
MbObjDescs: IMetabaseObjectDescriptors;
Begin
If Desc <> Null Then
Desc := Desc.Parent;
RDSParent := Desc;
If Not SearchCube Then
Debug.WriteLine("");
Debug.WriteLine("ATTENTION: The " + RDSParent.Id + " cube contains a cache whose size exceeds 1 GB; the cache data must be deleted");
ObjList := ObjList + " " + RDSParent.Id;
Debug.WriteLine("");
End If;
If SearchCube Then
MbObjDescs := RDSParent.Dependents;
For Each RDSParent In MbObjDescs Do
If RDSParent.ClassId=metabaseobjectclass.KE_CLASS_STDCUBE Then
Debug.WriteLine("");
Debug.WriteLine("ATTENTION: The " + RDSParent.Id + " cube contains attachments whose size exceeds 1 GB; such attachments must be unlinked from the cube");
Debug.WriteLine("");
ObjList := ObjList + " " + RDSParent.Id;
End If;
End For;
End If;
Else
Debug.WriteLine("");
Debug.WriteLine("Could not match the " + NativeNameStr + " relation with a repository object");
Debug.WriteLine("The object must be deleted by means of the DBMS");
Debug.WriteLine("");
ObjList := ObjList + " " + NativeNameStr;
End If;
Return RDSParent;
End Function MatchRelationWithMetabaseObject;
// Execute a DAL command depending on the type of the checked object
Sub ExecDalCom(ObjectType: String);
Begin
Com_loId := DB.Connection.CreateCommand("");
// Create an SQL query depending on the type of objects that should be checked
Select Case ObjectType
// Select data from the b_mod table to check document sizes
Case "Documents":
Com_loId.SQL := "SELECT bin, obj FROM b_mod ORDER BY obj DESC";
Else
// Table and field names are identifiers in the DBMS, so they must be enclosed in double quotation marks in the query
Com_loId.SQL := "SELECT " + """" + FieldName + """" + " FROM " + """" + NativeNameStr + """";
End Select;
Com_loId.Execute;
Com_loId.Parse;
Cur_loId := Com_loId.CreateCursor;
While Not Cur_loId.Eof Do
If Cur_loId.Fields.Item(0).Value <> 0 Then
Select Case ObjectType
Case "Documents":
obj_key := Cur_loId.Fields.Item(1).Value;
End Select;
StreamData := IOStream.FromVariant(Cur_loId.Fields.Item(0).Value);
lo_size := StreamData.Size;
If lo_size > 1073741824 Then
Select Case ObjectType
// Search for cubes that contain tables with attachments
Case "Attachments":
Debug.WriteLine("");
Debug.WriteLine("The " + NativeNameStr + " relation contains an LO object whose size is " + lo_size.ToString + " bytes (exceeds 1 GB)");
Debug.WriteLine("Attempting to match it with a repository object…");
ObjCounter := ObjCounter+1;
obj_id := string.Replace(NativeNameStr,"t","T_OBJ");
MbObjDesc := mb.ItemByIdNameSpace(obj_id, integer.Parse(obj_id.Replace(obj_id, "T_OBJ", "")));
MbObjDesc := MatchRelationWithMetabaseObject(MbObjDesc, True);
// Search for cubes that use caching
Case "CubeCacheTables":
Debug.WriteLine("");
Debug.WriteLine("The " + NativeNameStr + " relation contains an LO object whose size is " + lo_size.ToString + " bytes (exceeds 1 GB)");
Debug.WriteLine("Attempting to match it with a repository object…");
ObjCounter := ObjCounter+1;
obj_id := string.Replace(string.ToUpper(NativeNameStr), "_CB_COLS_TAB","");
obj_key := integer.Parse(string.Replace(obj_id, "OBJ", ""));
MbObjDesc := mb.Item(obj_key);
MbObjDesc := MatchRelationWithMetabaseObject(MbObjDesc, False);
// Search for documents
Case "Documents":
obj_id := MB.Item(obj_key).Id;
Debug.WriteLine("");
Debug.WriteLine("ATTENTION: the " + obj_id + " repository document contains a saved file whose size exceeds 1 GB; the object must be deleted");
Debug.WriteLine("");
ObjCounter := ObjCounter+1;
ObjList:= ObjList + " " + obj_id;
Case "OtherTables":
Debug.WriteLine("");
Debug.WriteLine("ATTENTION: the " + NativeNameStr + " relation in the schema contains an LO object whose size exceeds 1 GB");
Debug.WriteLine("It must be deleted by means of the DBMS");
Debug.WriteLine("");
ObjCounter := ObjCounter+1;
ObjList:= ObjList + " " + NativeNameStr;
End Select;
End If;
End If;
Cur_loId.Next;
End While;
Cur_loId.Close;
Com_loId.Close;
End Sub ExecDalCom;
Sub Main;
Begin
Mb := MetabaseClass.Active;
DB := MB.ItemById(DATABASE_OBJECT_ID).Open(Null) As IDatabaseInstance;
// If database object is in the container, use the following method for getting object descriptor:
// DB:= MB.ItemByIdNamespace(DATABASE_OBJECT_ID, <container object key>).Open(Null) As IDatabaseInstance;
Com := DB.Connection.CreateCommand("");
Com.SQL := SQL_GET_LO_TABLES;
Com.Execute;
Com.Parse;
Cur := Com.CreateCursor;
Debug.WriteLine("SQL query execution. In some cases execution can take several minutes…");
Debug.WriteLine("");
Debug.WriteLine("List of tables with LO fields in the current schema:");
Debug.WriteLine("-----------------------------------------");
While Not Cur.Eof Do
CurFields := Cur.Fields;
NativeNameStr := CurFields.Item(0).Value;
FieldName := CurFields.Item(1).Value;
Debug.WriteLine(NativeNameStr + " | " + FieldName);
If FieldName = "attachment" Then
ExecDalCom("Attachments");
Else
If string.Find(NativeNameStr, ("_cb_cols_tab"), 1) > 0 Then
ExecDalCom("CubeCacheTables");
Else
If NativeNameStr = "b_mod" Then
ExecDalCom("Documents");
Else
ExecDalCom("OtherTables");
End If;
End If;
End If;
Cur.Next;
End While;
Debug.WriteLine("");
Debug.WriteLine("Checking of LO objects in the schema is complete");
Debug.WriteLine("------------------------------------");
If ObjCounter = 0 Then
Debug.WriteLine("Result:");
Debug.WriteLine("The repository can be updated to the BYTEA version");
Else
Debug.WriteLine("Result:");
Debug.WriteLine("Large objects (LO) were found whose size exceeds the maximum allowed for the BYTEA data type");
Debug.WriteLine("Number of objects: " + ObjCounter.ToString);
Debug.WriteLine("List of objects: " + ObjList);
Debug.WriteLine("These schema objects must be deleted!");
End If;
Debug.WriteLine("");
Cur.Close;
Com.Close;
End Sub Main;
After the unit is executed, the console displays the list of tables with linked objects whose size exceeds 1 GB. The following objects are checked:
A table included in a cube that uses data caching. If such a table is found during the check, deselect the Cacheable Object checkbox in the cube properties and delete the table in the object navigator or by means of the DBMS.
A table included in an MDM dictionary for storing attachments. If such a table is found during the check, use the FILE_SIZE attribute to find the objects whose size exceeds 1 GB in the MDM dictionary. Then open the report to which the found objects are attached for editing and remove the attachments in the Attachments List dialog box.
A table containing fields of the Custom or Binary data type. If such a table is found during the check, delete it by means of the DBMS.
A document. If such an object is found during the check, delete it in the object navigator.
If no objects whose size exceeds 1 GB are found, the console displays the following message:
Result:
The repository can be updated to the BYTEA version
In this case no additional operations are required; proceed to the next step.
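Before converting custom tables, it can also be useful to estimate how much large-object data will be copied into BYTEA columns. A sketch of such an optional pre-check:

```sql
-- Total on-disk size of all large-object data in the current database
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject')) AS lo_storage_size;
```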
To convert custom tables, execute the following script as the postgres user or as a DBMS administrator with the SUPERUSER privilege:
begin;
do $$
declare
r record;
col_name varchar;
begin
for r in select a.table_name, a.column_name from
information_schema.columns a,
information_schema.tables b
where a.table_schema = 'public' and -- if required, specify custom schema instead of the 'public' schema
a.table_name = b.table_name and
a.table_schema = b.table_schema and
a.udt_name='lo' and
b.table_type='BASE TABLE' and a.table_name not like 'b_%'
loop
col_name = r.column_name || '_tmp';
execute format('alter table %I add column %I bytea', r.table_name, col_name);
execute format('update %I as a set %I = lo_get(b.oid) from pg_largeobject_metadata b where a.%I = b.oid', r.table_name, col_name, r.column_name);
execute format('select lo_unlink(oid) from pg_largeobject_metadata a, %I b where b.%I = a.oid', r.table_name, r.column_name);
execute format('alter table %I drop column %I', r.table_name, r.column_name);
execute format('alter table %I rename column %I to %I', r.table_name, col_name, r.column_name);
end loop;
end $$;
commit;
If table conversion causes errors, for example, because a view was created based on one of the converted tables, roll back the changes and recreate the view.
If the script executes without errors, the custom tables containing fields of the LO data type will be converted into tables containing fields of the BYTEA data type.
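The result can be verified with a query that reuses the script's own selection criteria; an empty result means that no custom tables with LO fields remain (assuming the public schema, as in the script):

```sql
-- Check for remaining columns of the lo type in custom tables
SELECT a.table_name, a.column_name
FROM information_schema.columns a
JOIN information_schema.tables b
  ON a.table_name = b.table_name AND a.table_schema = b.table_schema
WHERE a.table_schema = 'public'
  AND a.udt_name = 'lo'
  AND b.table_type = 'BASE TABLE'
  AND a.table_name NOT LIKE 'b_%';
```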
To convert system tables:
Update repository version in the repository manager: in Linux OS, Windows OS.
Delete unnecessary data on the DBMS server:
VACUUM FULL
Update permissions of users and user groups in the security manager.
After these operations are executed, the system tables containing fields of the LO data type will be converted into tables containing fields of the BYTEA data type. Source data is preserved during conversion.
See also: