pikesaku’s blog

Personal study notes. I take no responsibility whatsoever for the content.

Visualizing Logs as Images (Line Plot 2)

pikesaku.hatenablog.com
The difference from the post above is that query-string information is now reflected in the same way as the URL.
Red line: URL
Blue line: query string
 
Only the query-string key names are hashed; the value strings are ignored.
When there are multiple query parameters, the key names are sorted, joined with '&', and the resulting string is hashed.
 
Example: for the following log line (one query parameter)

192.168.56.1 - - [08/Jul/2018:12:33:46 +0900] "GET /wp/wp-login.php?action=lostpassword HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"

"action"をハッシュ化
 
Example: for the following log line (two query parameters)

192.168.56.1 - - [08/Jul/2018:12:33:47 +0900] "GET /wp/wp-login.php?action=lostpassword&hoge=fuga HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"

"action&hoge"をハッシュ化
 
 

Code

apache_log_trans_to_image.py

# -*- coding:utf-8 -*-
import argparse
import apache_log_trans_to_image_lib as alti

parser = argparse.ArgumentParser(description='apache log to graph')
parser.add_argument('log', help='log file', type=argparse.FileType('r'))
parser.add_argument('--hash', help='define hash type', type=str, choices=['md5', 'sha256'], default='md5')
parser.add_argument('--unit', help='unit of urls', type=int, default=10)
args = parser.parse_args()


if __name__ == '__main__':
    data = alti.get_data(args.log, args.unit)
    data = alti.change_data_for_graph(data, args.hash)
    alti.output_graph(data, args.unit, args.hash)

apache_log_trans_to_image_lib.py

# -*- coding:utf-8 -*-


def get_data(log, unit):
    import apache_log_parser
    import itertools

    def chk_key(line):
        required_key = ('request_url_path', 'remote_host', 'request_url_query_dict')
        for key in required_key:
            if key not in line:
                return False
        return True

    def chk_ext(line):
        request_url_path = line['request_url_path']
        except_ext = ('gif', 'jpg', 'png', 'ico', 'css', 'js', 'woff', 'ttf', 'svg')
        ext = request_url_path.split('.')[-1].lower()
        if ext in except_ext:
            return False
        return True

    data = dict()
    parser = apache_log_parser.make_parser('%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"')

    for line in log:
        line = line.strip()
        line = parser(line)
        if not chk_key(line):
            continue
        if not chk_ext(line):
            continue
        host = line['remote_host']
        request_url_path = line['request_url_path']
        request_url_query = '&'.join(sorted(list(line['request_url_query_dict'].keys())))

        if host in data:
            data[host].append([request_url_path, request_url_query])
        else:
            data[host] = [[request_url_path, request_url_query]]

    for host,request_data_list in data.items():
        request_data_list = list(itertools.zip_longest(*[iter(request_data_list)]*unit))
        request_data_list[-1] = [request_data for request_data in request_data_list[-1] if request_data is not None]
        data[host] = request_data_list
    return data


def change_data_for_graph(data, h):
    changed_data = dict()
    for host,request_data_list in data.items():
        units_of_nums = list()
        for part in request_data_list:
            nums = list()
            for line in part:
                request_url_num = trans_str_to_num(line[0], h)
                request_query_num = trans_str_to_num(line[1], h)
                nums.append([request_url_num, request_query_num])
            units_of_nums.append(nums)
        changed_data[host] = units_of_nums
    return changed_data


def trans_str_to_num(s, h):
    import hashlib
    import re
    s = s.encode('UTF-8')
    if h == 'md5':
        m = hashlib.md5()
    if h == 'sha256':
        m = hashlib.sha256()
    m.update(s)
    h = m.hexdigest()
    # h is the hex digest: 32 hex digits for md5 (64 for sha256).
    # Split it into 4-digit chunks; each becomes an int in the 0-65535 range.
    # https://stackoverflow.com/questions/13673060/split-string-into-strings-by-length
    nums = [ int(i, 16) for i in re.split('(.{4})', h)[1::2] ]
    return nums


def output_graph(data, unit, h):
    import numpy as np
    import matplotlib.pyplot as plt

    if h == 'md5':
        xtick = 8
    if h == 'sha256':
        xtick = 16

    # https://stackoverflow.com/questions/24943991/change-grid-interval-and-specify-tick-labels-in-matplotlib
    for ip,units_of_nums in data.items():
        seq = 0
        for unit_of_nums in units_of_nums:
            url_nums = list()
            query_nums = list()
            for num in unit_of_nums:
                url_nums.extend(num[0])
                query_nums.extend(num[1])
            url_x, url_y = (range(len(url_nums)), url_nums)
            query_x, query_y = (range(len(query_nums)), query_nums)
            fig, ax = plt.subplots()
            major_xticks = np.arange(0, unit*xtick+1, xtick)
            minor_xticks = np.arange(0, unit*xtick+1, 1)
            major_yticks = np.arange(0, 65535+1, 10000)
            minor_yticks = np.arange(0, 65535+1, 1000)

            ax.set_xticks(major_xticks)
            ax.set_xticks(minor_xticks, minor=True)
            ax.set_yticks(major_yticks)
            ax.set_yticks(minor_yticks, minor=True)
            ax.grid(which='both')
            ax.grid(which='minor', alpha=0.2)
            ax.grid(which='major', alpha=0.8)
            plt.plot(url_x, url_y, color='red', lw=0.5)
            plt.plot(query_x, query_y, color='blue', lw=0.5)

            # To display the tick labels, comment out the following two lines
            plt.yticks(color='None')
            plt.xticks(color='None')
            #

            plt.xlim([0, unit*xtick])
            plt.ylim([0, 65535])
            plt.savefig(ip + '_' + str(seq) + '.png')
            plt.close()
            seq += 1

Output

In every case below, --unit is left at its default of 10.

For the following log:

192.168.56.1 - - [08/Jul/2018:12:33:46 +0900] "GET /wp/wp-login.php?action=lostpassword HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"

f:id:pikesaku:20181125013753p:plain
 

For the following log:

192.168.56.1 - - [08/Jul/2018:12:33:46 +0900] "GET /wp/wp-login.php?action=lostpassword HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"
192.168.56.1 - - [08/Jul/2018:12:33:47 +0900] "GET /wp/wp-login.php?action=lostpassword HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"

f:id:pikesaku:20181125013845p:plain
 

For the following log:

192.168.56.1 - - [08/Jul/2018:12:33:46 +0900] "GET /wp/wp-login.php?action=lostpassword HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"
192.168.56.1 - - [08/Jul/2018:12:33:47 +0900] "GET /wp/wp-login.php?action=lostpassword HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"
192.168.56.1 - - [08/Jul/2018:12:33:47 +0900] "GET /wp/wp-login.php?action=lostpassword&hoge=fuga HTTP/1.1" 200 2049 "http://192.168.56.101/wp/wp-login.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36"

f:id:pikesaku:20181125013914p:plain

AWS S3 SDK Notes (work in progress)

Notes

  • Authentication uses an AWS account or IAM user; a file containing the credentials must be created.
  • Temporary credentials can also be used (they are only valid for a limited time).
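As a note on the second point, a minimal sketch of passing credentials (including a temporary session token) directly to boto3; the key values below are placeholders:

import boto3

# Normally the credentials file (~/.aws/credentials) or an instance role is used.
# Temporary credentials obtained from STS can also be passed explicitly like this.
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIAXXXXXXXXXXXXXXXX',           # placeholder
    aws_secret_access_key='xxxxxxxxxxxxxxxxxxxxxxxxxx', # placeholder
    aws_session_token='xxxxxxxxxxxxxxxxxxxx',           # only needed for temporary credentials
)
print([b['Name'] for b in s3.list_buckets()['Buckets']])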

 
 

Python SDK (boto3) notes

boto3 must be installed even when running on the AMI (Amazon Linux).

# sudo pip install boto3

 
 

Sample code

import boto3
import botocore
import random, string

# Create an S3 client
s3 = boto3.client('s3')

# Call S3 to list current buckets
response = s3.list_buckets()

# Get a list of all bucket names from the response
buckets = [bucket['Name'] for bucket in response['Buckets']]

# Create bucket name
bn = ''.join([random.choice(string.ascii_letters + string.digits).lower() for i in range(16)])

# Override the random name with a fixed bucket name (reused across runs)
bn = 'vz6g2zdx3ry3dpiv'

# Check whether my-bucket exists
if bn in buckets:
    print(bn + ' already exists!')
else:
    s3.create_bucket(Bucket=bn)

# Upload file to bucket
fn = '/etc/hosts'
fn_k = fn.split('/')[-1]
s3.upload_file(fn, bn, fn_k)

# Download file from bucket
try:
    s3.download_file(bn, fn_k, 'downloaded_file')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        print("The object does not exist.")
    else:
        raise


API types

There are low-level and high-level APIs.
Client is the low-level API.
Resource is the high-level API.
Boto3 で S3 のオブジェクトを操作する(高レベルAPIと低レベルAPI) - Qiita
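A minimal sketch contrasting the two, assuming a bucket named 'my-bucket' already exists:

import boto3

# Low-level API: Client (one method per API operation, dict responses)
client = boto3.client('s3')
print([b['Name'] for b in client.list_buckets()['Buckets']])

# High-level API: Resource (object-oriented wrappers around the same operations)
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
s3.Bucket('my-bucket').upload_file('/etc/hosts', 'hosts')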
 
 

CORS

Short for Cross-Origin Resource Sharing.
A configuration that allows requests even when they come from a different website (a different origin).
It can be configured on a bucket with boto3; see the code below.
Configuring Amazon S3 Buckets — Boto 3 Docs 1.9.62 documentation
The configuration defines the allowed origins, methods, and so on.
 
 

Code to get the CORS configuration

import boto3

# Create an S3 client
s3 = boto3.client('s3')

# Call S3 to get CORS configuration for selected bucket
result = s3.get_bucket_cors(Bucket='my-bucket')

If the bucket has no CORS configuration, the following error is raised.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 320, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 623, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (NoSuchCORSConfiguration) when calling the GetBucketCors operation: The CORS configuration does not exist
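A minimal sketch of handling that case, assuming the same hypothetical bucket name 'my-bucket':

import boto3
import botocore

s3 = boto3.client('s3')

try:
    result = s3.get_bucket_cors(Bucket='my-bucket')
    print(result['CORSRules'])
except botocore.exceptions.ClientError as e:
    # A missing CORS configuration surfaces as a ClientError with this error code
    if e.response['Error']['Code'] == 'NoSuchCORSConfiguration':
        print('No CORS configuration is set on this bucket.')
    else:
        raise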

 
 

Code to set the CORS configuration

import boto3

# Create an S3 client
s3 = boto3.client('s3')

# Create the CORS configuration
cors_configuration = {
    'CORSRules': [{
        'AllowedHeaders': ['Authorization'],
        'AllowedMethods': ['GET', 'PUT'],
        'AllowedOrigins': ['*'],
        'ExposeHeaders': ['ETag'],  # ExposeHeaders takes response header names, not HTTP methods
        'MaxAgeSeconds': 3000
    }]
}

# Set the new CORS configuration on the selected bucket
s3.put_bucket_cors(Bucket='my-bucket', CORSConfiguration=cors_configuration)

 
 

Code to get the ACL configuration

import boto3

# Create an S3 client
s3 = boto3.client('s3')

# Call to S3 to retrieve the policy for the given bucket
result = s3.get_bucket_acl(Bucket='my-bucket')
print(result)

Next steps
Look at the "Working with Amazon S3 Bucket Policies" sample code in
Amazon S3 Examples — Boto 3 Docs 1.9.62 documentation

and at
アクセスコントロールリスト (ACL) の概要 - Amazon Simple Storage Service
to investigate ACLs in more detail.
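As a starting point for that follow-up, a minimal sketch of the bucket-policy calls covered by the sample code referenced above, assuming a hypothetical bucket 'my-bucket' that has a policy attached:

import boto3

s3 = boto3.client('s3')

# Retrieve the bucket policy (raises ClientError if no policy is attached)
result = s3.get_bucket_policy(Bucket='my-bucket')
print(result['Policy'])

# A policy can be removed again with:
# s3.delete_bucket_policy(Bucket='my-bucket')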

About Web APIs

There are broadly two kinds.

SOAP

  • Short for Simple Object Access Protocol
  • Both requests and responses are XML.
  • The XML format (the SOAP message) is defined by the specification, so SOAP also refers to a protocol for exchanging data.
  • WSDL (Web Services Description Language) describes a web service's interface; by defining the SOAP messages in WSDL and sharing it between the API consumer and the API provider, the two sides can exchange SOAP messages.
  • Service-oriented: a URL corresponds to an operation.

 Example: deleting a user → /deleteUser
 

REST

  • Short for Representational State Transfer
  • It is a design philosophy, not a specification like SOAP; "RESTful" means being faithful to that philosophy.
  • No session or state management: each exchange is complete in itself. (If it is not, and the next request depends on it, session/state management becomes necessary.)
  • Requests use HTTP methods; responses are XML, JSON, etc. There is no fixed standard.
  • Resource-oriented: a URL corresponds to the target resource.

 Example: deleting a user → issue an HTTP DELETE against /User
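To make the contrast concrete, a minimal sketch of the two styles using the requests library; the endpoint URLs and user ID are hypothetical:

import requests

# SOAP style: POST an XML SOAP message to an operation-oriented URL
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body><deleteUser><id>123</id></deleteUser></soap:Body>
</soap:Envelope>"""
requests.post('https://api.example.com/deleteUser', data=soap_body,
              headers={'Content-Type': 'application/soap+xml'})

# REST style: the URL identifies the resource, the HTTP method is the operation
requests.delete('https://api.example.com/users/123')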

AWS CLI S3 Command --include / --exclude Notes

Key points

・Filters are evaluated in order, and the one evaluated later wins. With this in mind, specifying both --include and --exclude may be more intuitive.
 Example 1: --exclude "*" --include "*.txt" → only files ending in .txt match
 Example 2: --include "*.txt" --exclude "*" → everything is excluded (the later --exclude "*" wins)
・* matches everything
・? matches any single character
・[sequence] and [!sequence] allow character-class-style matching
 

Excerpt from aws s3 help

   Use of Exclude and Include Filters
       Currently, there is no support for the use of UNIX style wildcards in a
       command's  path  arguments.   However,  most  commands  have  --exclude
       "<value>" and --include  "<value>"  parameters  that  can  achieve  the
       desired  result.   These  parameters perform pattern matching to either
       exclude or include a particular file or object.  The following  pattern
       symbols are supported.

          o *: Matches everything

          o ?: Matches any single character

          o [sequence]: Matches any character in sequence

          o [!sequence]: Matches any character not in sequence

       Any  number of these parameters can be passed to a command.  You can do
       this by providing an --exclude or --include  argument  multiple  times,
       e.g.   --include  "*.txt"  --include  "*.png".  When there are multiple
       filters, the rule is the filters that appear later in the command  take
       precedence  over filters that appear earlier in the command.  For exam-
       ple, if the filter parameters passed to the command were

          --exclude "*" --include "*.txt"

       All files will be excluded from the command  except  for  files  ending
       with  .txt   However, if the order of the filter parameters was changed
       to

          --include "*.txt" --exclude "*"

       All files will be excluded from the command.

AWS CLI S3 Command Notes

References

aws s3 help
aws s3 <command> help
aws s3 syncするシェルスクリプトでワイルドカードでexcludeしたときのメモ - Qiita
→ --exclude did not behave as I expected and I got stuck... the page above solved it. Many thanks!
 

What I did first

・Assign a role to be used by EC2
f:id:pikesaku:20181104085718p:plain
・Launch an EC2 instance with the role attached (Amazon Linux)
・Log in to the EC2 instance via SSH (as ec2-user)
 

Test script

#!/bin/bash

S3="aws s3"
BN=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 16 | head -n 1)
TF="./test"
TFN=$(basename $TF)
TD="./testdir"
TDN="$(basename ./testdir)"

com_exec() {
  MES="$1"
  COM="$2"
  echo "########## $MES"
  echo "[command]"
  echo "$COM"
  echo  ""
  echo "[output]"
  eval "$COM"
  echo ""
  echo ""
}

rm -rf $TF $TD

com_exec "create s3 bucket" "$S3 mb s3://$BN"
com_exec "check createed s3 bucket" "$S3 ls s3://$BN"
com_exec "make local file" "touch $TF"
com_exec "upload local file" "$S3 cp $TF s3://$BN/$TFN"
com_exec "check uploaded s3 file" "$S3 ls --recursive s3://$BN"
com_exec "download uploaded s3 file" "$S3 cp s3://$BN/$TFN $TF-dl"
com_exec "check downloaded local file" "ls $TF-dl"
com_exec "remove s3 file" "$S3 rm s3://$BN/$TFN"
com_exec "make local directory" "mkdir -p $TD"
com_exec "mv local file" "mv $TF-dl $TD/."
com_exec "upload local directory" "$S3 cp --recursive $TD s3://$BN/$TDN"
com_exec "check uploaded s3 directory" "$S3 ls --recursive s3://$BN"
com_exec "remove local directory" "rm -rf $TD"
com_exec "download s3 directory" "$S3 cp --recursive s3://$BN/$TDN $TD"
com_exec "check downloaded local directory" "find $TD"
com_exec "make local file" "touch $TD/$TF-2"
com_exec "sync local direcotry to s3" "$S3 sync $TD s3://$BN/$TDN"
com_exec "remove local file" "rm $TD/$TF-2"
com_exec "sync local direcotry to s3 with --delete" "$S3 sync --delete $TD s3://$BN/$TDN"
com_exec "make local file" "touch $TD/$TF-3 $TD/$TF-4"
com_exec "sync local direcotry to s3 with --exclude" "$S3 sync --exclude '$(basename $TF-3)' $TD s3://$BN/$TDN"
com_exec "remove s3 bucket" "$S3 rb --force s3://$BN"

Test script output

########## create s3 bucket
[command]
aws s3 mb s3://3bihbmwqlzmh7evd

[output]
make_bucket: 3bihbmwqlzmh7evd


########## check created s3 bucket
[command]
aws s3 ls s3://3bihbmwqlzmh7evd

[output]


########## make local file
[command]
touch ./test

[output]


########## upload local file
[command]
aws s3 cp ./test s3://3bihbmwqlzmh7evd/test

[output]
upload: ./test to s3://3bihbmwqlzmh7evd/test


########## check uploaded s3 file
[command]
aws s3 ls --recursive s3://3bihbmwqlzmh7evd

[output]
2018-11-04 03:45:48          0 test


########## download uploaded s3 file
[command]
aws s3 cp s3://3bihbmwqlzmh7evd/test ./test-dl

[output]
download: s3://3bihbmwqlzmh7evd/test to ./test-dl


########## check downloaded local file
[command]
ls ./test-dl

[output]
./test-dl


########## remove s3 file
[command]
aws s3 rm s3://3bihbmwqlzmh7evd/test

[output]
delete: s3://3bihbmwqlzmh7evd/test


########## make local directory
[command]
mkdir -p ./testdir

[output]


########## mv local file
[command]
mv ./test-dl ./testdir/.

[output]


########## upload local directory
[command]
aws s3 cp --recursive ./testdir s3://3bihbmwqlzmh7evd/testdir

[output]
upload: testdir/test-dl to s3://3bihbmwqlzmh7evd/testdir/test-dl


########## check uploaded s3 directory
[command]
aws s3 ls --recursive s3://3bihbmwqlzmh7evd

[output]
2018-11-04 03:45:50          0 testdir/test-dl


########## remove local directory
[command]
rm -rf ./testdir

[output]


########## download s3 directory
[command]
aws s3 cp --recursive s3://3bihbmwqlzmh7evd/testdir ./testdir

[output]
download: s3://3bihbmwqlzmh7evd/testdir/test-dl to testdir/test-dl


########## check downloaded local directory
[command]
find ./testdir

[output]
./testdir
./testdir/test-dl


########## make local file
[command]
touch ./testdir/./test-2

[output]


########## sync local directory to s3
[command]
aws s3 sync ./testdir s3://3bihbmwqlzmh7evd/testdir

[output]
upload: testdir/test-2 to s3://3bihbmwqlzmh7evd/testdir/test-2


########## remove local file
[command]
rm ./testdir/./test-2

[output]


########## sync local directory to s3 with --delete
[command]
aws s3 sync --delete ./testdir s3://3bihbmwqlzmh7evd/testdir

[output]
delete: s3://3bihbmwqlzmh7evd/testdir/test-2


########## make local file
[command]
touch ./testdir/./test-3 ./testdir/./test-4

[output]


########## sync local directory to s3 with --exclude
[command]
aws s3 sync --exclude 'test-3' ./testdir s3://3bihbmwqlzmh7evd/testdir

[output]
upload: testdir/test-4 to s3://3bihbmwqlzmh7evd/testdir/test-4


########## remove s3 bucket
[command]
aws s3 rb --force s3://3bihbmwqlzmh7evd

[output]
delete: s3://3bihbmwqlzmh7evd/testdir/test-4
delete: s3://3bihbmwqlzmh7evd/testdir/test-dl
remove_bucket: 3bihbmwqlzmh7evd

aws command help output

Shows which subcommands are available.

$ aws help 
AWS()                                                                    AWS()



NAME
       aws -

DESCRIPTION
       The  AWS  Command  Line  Interface is a unified tool to manage your AWS
       services.

SYNOPSIS
          aws [options] <command> <subcommand> [parameters]

       Use aws command help for information on a  specific  command.  Use  aws
       help  topics  to view a list of available help topics. The synopsis for
       each command shows its parameters and their usage. Optional  parameters
       are shown in square brackets.

OPTIONS
       --debug (boolean)

       Turn on debug logging.

       --endpoint-url (string)

       Override command's default URL with the given URL.

       --no-verify-ssl (boolean)

       By  default, the AWS CLI uses SSL when communicating with AWS services.
       For each SSL connection, the AWS CLI will verify SSL certificates. This
       option overrides the default behavior of verifying SSL certificates.

       --no-paginate (boolean)

       Disable automatic pagination.

       --output (string)

       The formatting style for command output.

       o json

       o text

       o table

       --query (string)

       A JMESPath query to use in filtering the response data.

       --profile (string)

       Use a specific profile from your credential file.

       --region (string)

       The region to use. Overrides config/env settings.

       --version (string)

       Display the version of this tool.

       --color (string)

       Turn on/off color output.

       o on

       o off

       o auto

       --no-sign-request (boolean)

       Do  not  sign requests. Credentials will not be loaded if this argument
       is provided.

       --ca-bundle (string)

       The CA certificate bundle to use when verifying SSL certificates. Over-
       rides config/env settings.

       --cli-read-timeout (int)

       The  maximum socket read time in seconds. If the value is set to 0, the
       socket read will be blocking and not timeout.

       --cli-connect-timeout (int)

       The maximum socket connect time in seconds. If the value is set  to  0,
       the socket connect will be blocking and not timeout.

AVAILABLE SERVICES
       o acm

       o alexaforbusiness

       o apigateway

       o application-autoscaling

       o appstream

       o appsync

       o athena

       o autoscaling

       o batch

       o budgets

       o ce

       o cloud9

       o clouddirectory

       o cloudformation

       o cloudfront

       o cloudhsm

       o cloudhsmv2

       o cloudsearch

       o cloudsearchdomain

       o cloudtrail

       o cloudwatch

       o codebuild

       o codecommit

       o codepipeline

       o codestar

       o cognito-identity

       o cognito-idp

       o cognito-sync

       o comprehend

       o configservice

       o configure

       o cur

       o datapipeline

       o dax

       o deploy

       o devicefarm

       o directconnect

       o discovery

       o dms

       o ds

       o dynamodb

       o dynamodbstreams

       o ec2

       o ecr

       o ecs

       o efs

       o elasticache

       o elasticbeanstalk

       o elastictranscoder

       o elb

       o elbv2

       o emr

       o es

       o events

       o firehose

       o gamelift

       o glacier

       o glue

       o greengrass

       o guardduty

       o health

       o help

       o history

       o iam

       o importexport

       o inspector

       o iot

       o iot-data

       o iot-jobs-data

       o kinesis

       o kinesis-video-archived-media

       o kinesis-video-media

       o kinesisanalytics

       o kinesisvideo

       o kms

       o lambda

       o lex-models

       o lex-runtime

       o lightsail

       o logs

       o machinelearning

       o marketplace-entitlement

       o marketplacecommerceanalytics

       o mediaconvert

       o medialive

       o mediapackage

       o mediastore

       o mediastore-data

       o meteringmarketplace

       o mgh

       o mobile

       o mq

       o mturk

       o opsworks

       o opsworks-cm

       o organizations

       o pinpoint

       o polly

       o pricing

       o rds

       o redshift

       o rekognition

       o resource-groups

       o resourcegroupstaggingapi

       o route53

       o route53domains

       o s3

       o s3api

       o sagemaker

       o sagemaker-runtime

       o sdb

       o serverlessrepo

       o servicecatalog

       o servicediscovery

       o ses

       o shield

       o sms

       o snowball

       o sns

       o sqs

       o ssm

       o stepfunctions

       o storagegateway

       o sts

       o support

       o swf

       o translate

       o waf

       o waf-regional

       o workdocs

       o workmail

       o workspaces

       o xray

SEE ALSO
       o aws help topics



                                                                         AWS()