

Recently I had a discussion with my manager about the need to save the log data that is stored in the STL log views, as I had found the following line in the AWS Redshift documentation:

"To manage disk space, the STL log views only retain approximately two to five days of log history, depending on log usage and available disk space. If you want to retain the log data, you will need to periodically copy it to other tables or unload it to Amazon S3."

To find a solution, I searched the Internet; however, apart from a few Q&As, I could not find a complete solution to implement. Therefore I decided to create a simple one and share it with the community.

My Solution

My manager and I were interested in copying only the five main system log views/tables to new backup tables, using an AWS Redshift scheduled query: svl_stored_proc_call, svl_stored_proc_messages, stl_query, stl_load_commits, and stl_load_errors. A list of the system log views is available in the AWS Redshift documentation.

I created a new schema and named it "audit", using the following script:

CREATE SCHEMA audit;

After creating the schema, I created five tables in the audit schema to back up the five SVL/STL views. The backup tables have the same column names as the system log views, with an additional last column, modifieddatetime. This last column has a default value of the current system datetime stamp, which allows me to capture the time when a record was inserted into any of the backup tables. Abridged, the table definitions look like this:

CREATE TABLE "audit"."bck_svl_stored_proc_call"
(
    ...
    starttime timestamp without time zone encode az64,
    endtime timestamp without time zone encode az64,
    modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);

CREATE TABLE "audit"."bck_svl_stored_proc_messages"
(
    ...
    recordtime timestamp without time zone encode az64,
    loglevel_text character varying(9) encode lzo,
    modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);

CREATE TABLE "audit"."bck_stl_query"
(
    ...
    concurrency_scaling_status integer encode az64,
    modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);

CREATE TABLE "audit"."bck_stl_load_commits"
(
    ...
    curtime timestamp without time zone encode az64,
    modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);

CREATE TABLE "audit"."bck_stl_load_errors"
(
    ...
    raw_field_value character(1024) encode lzo,
    modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);
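For reference, here is the first of these tables written out in full. The column list mirrors svl_stored_proc_call; the types and encodings below are a sketch based on that view's documented columns, so verify them against the view definition on your own cluster before creating the tables:

CREATE TABLE "audit"."bck_svl_stored_proc_call"
(
    userid integer encode az64,
    session_userid integer encode az64,
    query integer encode az64,
    label character(320) encode lzo,
    xid bigint encode az64,
    pid integer encode az64,
    "database" character(32) encode lzo,
    querytxt character(4000) encode lzo,
    starttime timestamp without time zone encode az64,
    endtime timestamp without time zone encode az64,
    aborted integer encode az64,
    from_sp_call integer encode az64,
    -- extra audit column: records when the row was copied into the backup table
    modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);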
After these tables had been created, I created the following stored procedure to update them periodically with the data in the system tables/views. For the stored-procedure views, it copies every row that is not yet present in the backup table; for stl_query and stl_load_commits, it copies only rows with a query id greater than the highest one already backed up:

CREATE OR REPLACE PROCEDURE p_copy_svl_tables ()
AS $$
BEGIN

    INSERT INTO "audit"."bck_svl_stored_proc_call"
        (userid, session_userid, query, label, xid, pid, database, querytxt, starttime, endtime, aborted, from_sp_call)
    SELECT userid, session_userid, query, label, xid, pid, database, querytxt, starttime, endtime, aborted, from_sp_call
    FROM svl_stored_proc_call org
    WHERE NOT EXISTS (SELECT 1 FROM "audit"."bck_svl_stored_proc_call" aud
                      WHERE org.userid = aud.userid AND org.session_userid = aud.session_userid
                        AND org.xid = aud.xid AND org.query = aud.query);

    INSERT INTO audit.bck_svl_stored_proc_messages
        (userid, session_userid, pid, xid, query, recordtime, loglevel, loglevel_text, message, linenum, querytxt, label, aborted)
    SELECT userid, session_userid, pid, xid, query, recordtime, loglevel, loglevel_text, message, linenum, querytxt, label, aborted
    FROM svl_stored_proc_messages org
    WHERE NOT EXISTS (SELECT 1 FROM audit.bck_svl_stored_proc_messages aud
                      WHERE org.userid = aud.userid AND org.session_userid = aud.session_userid
                        AND org.xid = aud.xid AND org.pid = aud.pid AND org.query = aud.query);

    INSERT INTO audit.bck_stl_query
        (userid, query, label, xid, pid, database, querytxt, starttime, endtime, aborted, insert_pristine, concurrency_scaling_status)
    SELECT userid, query, label, xid, pid, database, querytxt, starttime, endtime, aborted, insert_pristine, concurrency_scaling_status
    FROM stl_query
    WHERE query > (SELECT nvl(max(query), 0) FROM audit.bck_stl_query);

    INSERT INTO "audit"."bck_stl_load_commits"
        (userid, query, slice, name, filename, byte_offset, lines_scanned, errors, status, curtime, file_format, is_partial, start_offset)
    SELECT userid, query, slice, name, filename, byte_offset, lines_scanned, errors, status, curtime, file_format, is_partial, start_offset
    FROM stl_load_commits
    WHERE query > (SELECT nvl(max(query), 0) FROM audit.bck_stl_load_commits);

    -- audit.bck_stl_load_errors is loaded the same way, with query > max(query) against stl_load_errors

END;
$$ LANGUAGE plpgsql;

I scheduled this procedure to run periodically, and I check the backup tables after each run. This will ensure the process is running correctly and you can identify any issues with the process.
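Concretely, the scheduled statement is just a call to the procedure, and a quick check like the one below (shown for bck_stl_query; any of the backup tables works the same way) confirms that new rows arrived in the latest run:

-- statement to schedule in the Redshift query editor (for example, daily)
CALL p_copy_svl_tables();

-- sanity check: total rows and the time of the most recent copy
SELECT count(*) AS total_rows, max(modifieddatetime) AS last_copy_time
FROM audit.bck_stl_query;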
