

Database audits are one of the important compliance requirements that organizations need to meet. You might be required to capture, store, and retain the audit data for the long term. You also need to meet your organization’s information security regulations and standards.

In this post, we show you how to capture audit data from an Amazon Relational Database Service (Amazon RDS) for PostgreSQL database and store it in Amazon Simple Storage Service (Amazon S3). We also show you how to process the audit data using AWS Glue and query it with Amazon Athena. This solution makes it easier to process and query audit data and can free up database resources from storing and processing audit data.

Solution overview

The following diagram shows the solution architecture, which uses Amazon CloudWatch Logs, Amazon Kinesis Data Firehose, AWS Lambda, Amazon S3, AWS Glue, and Amazon Athena to analyze PostgreSQL audit files.

The high-level steps to implement this solution are as follows:

1. Create AWS Identity and Access Management (IAM) roles.
2. Create a Lambda function to decompress and transform the log streams.
3. Create an S3 bucket for storing the files generated by Kinesis Data Firehose.
4. Enable Amazon RDS to write to CloudWatch Logs.
5. Create the Firehose delivery stream.
6. Set up an AWS Glue database, crawler, and table.
7. Run Athena queries to identify database performance issues.

Prerequisites

To follow along with this post, you must have the following prerequisites:

- An AWS account with proper privileges to create and configure the necessary infrastructure.
- An RDS for PostgreSQL database. For instructions, refer to Create and Connect to a PostgreSQL Database.
- Auditing set up for the PostgreSQL database. For instructions, refer to Logging at the session and object level with the pgAudit extension.
- The necessary roles to interact with the services. We provide more details in the following section.
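
On Amazon RDS, the pgAudit settings from the prerequisite above are typically applied through a custom DB parameter group. As an illustration, they can also be set with the AWS SDK for Python (Boto3); the parameter group name and the pgaudit.log classes shown here are placeholders, and shared_preload_libraries only takes effect after a reboot.

```python
import boto3

rds = boto3.client("rds")

# Assumes a custom DB parameter group (here "audit-pg-params") is already
# attached to the RDS for PostgreSQL instance.
rds.modify_db_parameter_group(
    DBParameterGroupName="audit-pg-params",
    Parameters=[
        # Static parameter: pgAudit must be preloaded; requires a reboot.
        {
            "ParameterName": "shared_preload_libraries",
            "ParameterValue": "pgaudit",
            "ApplyMethod": "pending-reboot",
        },
        # Dynamic parameter: choose the statement classes to audit.
        {
            "ParameterName": "pgaudit.log",
            "ParameterValue": "ddl,write",
            "ApplyMethod": "immediate",
        },
    ],
)
```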

Because this solution involves setting up and using AWS resources, it will incur costs in your account. Refer to AWS Pricing for more information. We strongly recommend that you set this up in a non-production instance and run end-to-end validations before you implement this solution in a production environment.

Create IAM roles

You can create your IAM roles using the IAM console, the AWS Command Line Interface (AWS CLI), the AWS Tools for PowerShell, or the IAM API. For more information, refer to Creating IAM roles. You need three roles: CWLtoKinesisFirehoseRole, FirehosetoS3Role, and CWLtofirehose-lambda-exec-role. The CWLtoKinesisFirehoseRole allows CloudWatch Logs to stream data to Kinesis Data Firehose, FirehosetoS3Role allows Kinesis Data Firehose to write the audit files to the S3 bucket, and CWLtofirehose-lambda-exec-role is the execution role for the Lambda function that transforms the records.
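
As an illustration, the CWLtoKinesisFirehoseRole could be created with Boto3 along the following lines. The policy name, account ID, Region, and delivery stream name are placeholders, and the exact trust and permissions statements should follow the CloudWatch Logs documentation for streaming to Kinesis Data Firehose.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the CloudWatch Logs service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "logs.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="CWLtoKinesisFirehoseRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions that let the role put records into the delivery stream.
iam.put_role_policy(
    RoleName="CWLtoKinesisFirehoseRole",
    PolicyName="cwl-to-firehose-permissions",
    PolicyDocument=json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
                    "Resource": "arn:aws:firehose:us-east-1:111122223333:deliverystream/pgaudit-stream",
                }
            ],
        }
    ),
)
```

FirehosetoS3Role and CWLtofirehose-lambda-exec-role follow the same pattern, with firehose.amazonaws.com and lambda.amazonaws.com as the trusted services and with Amazon S3 and CloudWatch Logs permissions, respectively.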

Create a Lambda function

Kinesis Data Firehose invokes this function to transform each log record before delivering it to Amazon S3. The function uses the AWSLambdaPowertoolsPythonV2 layer. To add the layer, complete the following steps:

1. On the Lambda console, choose Layers in the navigation pane.
2. For AWS layers, choose the layer AWSLambdaPowertoolsPythonV2.

The function base64-decodes each incoming record, decompresses the gzip payload that CloudWatch Logs delivers, collapses runs of whitespace in each log message, re-encodes the transformed data, and logs the number of records it processed.

Create an S3 bucket

Next, we create a bucket to store the audit files generated by Kinesis Data Firehose. For instructions on creating your S3 bucket, refer to Creating a bucket.
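
If you prefer to script this step, a minimal Boto3 sketch follows; the bucket name and Region are placeholders, and bucket names must be globally unique.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Outside us-east-1, also pass CreateBucketConfiguration with a
# LocationConstraint for your Region.
s3.create_bucket(Bucket="pgaudit-firehose-audit-files-111122223333")
```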

Enable Amazon RDS to write to CloudWatch Logs

To enable Amazon RDS to write to CloudWatch Logs, complete the following steps:

1. On the Amazon RDS console, choose Databases in the navigation pane.
2. Choose the instance that you want to publish logs to CloudWatch for, then choose Modify.
3. In the Log exports section, select the log types that you want to publish.

Create the Firehose delivery stream with the following steps:

1. On the Kinesis Data Firehose console, choose Create a delivery stream.
2. For Delivery stream name, enter a name.
3. For Data transformation, select Enabled.
4. For AWS Lambda function, enter your function ARN.
5. For Buffer size, enter your preferred buffer size for your function.
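
As an illustration, both of these procedures can also be scripted with Boto3. The instance identifier, stream name, ARNs, and buffer settings below are placeholders, and the FirehosetoS3Role must already allow Kinesis Data Firehose to write to your bucket.

```python
import boto3

rds = boto3.client("rds")
firehose = boto3.client("firehose")

# Publish the PostgreSQL and upgrade logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="pgaudit-demo",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql", "upgrade"]},
    ApplyImmediately=True,
)

# Create the delivery stream with data transformation enabled.
firehose.create_delivery_stream(
    DeliveryStreamName="pgaudit-stream",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/FirehosetoS3Role",
        "BucketARN": "arn:aws:s3:::pgaudit-firehose-audit-files-111122223333",
        "BufferingHints": {"SizeInMBs": 1, "IntervalInSeconds": 60},
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [
                {
                    "Type": "Lambda",
                    "Parameters": [
                        {
                            "ParameterName": "LambdaArn",
                            "ParameterValue": "arn:aws:lambda:us-east-1:111122223333:function:cwl-to-firehose-transform",
                        },
                        # Buffer size used when batching records for the function.
                        {"ParameterName": "BufferSizeInMBs", "ParameterValue": "1"},
                    ],
                }
            ],
        },
    },
)
```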

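
Finally, CloudWatch Logs streams the database log group to the delivery stream through a subscription filter that uses the CWLtoKinesisFirehoseRole. A minimal sketch, assuming the default RDS for PostgreSQL log group name and placeholder ARNs:

```python
import boto3

logs = boto3.client("logs")

# Subscribe the RDS for PostgreSQL log group to the delivery stream. An empty
# filter pattern forwards every log event.
logs.put_subscription_filter(
    logGroupName="/aws/rds/instance/pgaudit-demo/postgresql",
    filterName="pgaudit-to-firehose",
    filterPattern="",
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/pgaudit-stream",
    roleArn="arn:aws:iam::111122223333:role/CWLtoKinesisFirehoseRole",
)
```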